Oct 8 19:50:18.882355 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 19:50:18.882376 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:50:18.882387 kernel: BIOS-provided physical RAM map: Oct 8 19:50:18.882394 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 8 19:50:18.882400 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 8 19:50:18.882406 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 8 19:50:18.882413 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 8 19:50:18.882420 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 8 19:50:18.882426 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 19:50:18.882434 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 8 19:50:18.882444 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 8 19:50:18.882450 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 8 19:50:18.882456 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 8 19:50:18.882463 kernel: NX (Execute Disable) protection: active Oct 8 19:50:18.882470 kernel: APIC: Static calls initialized Oct 8 19:50:18.882480 kernel: SMBIOS 2.8 present. 
Oct 8 19:50:18.882489 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 8 19:50:18.882496 kernel: Hypervisor detected: KVM Oct 8 19:50:18.882503 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 19:50:18.882509 kernel: kvm-clock: using sched offset of 2954977317 cycles Oct 8 19:50:18.882516 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 19:50:18.882524 kernel: tsc: Detected 2794.748 MHz processor Oct 8 19:50:18.882531 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 19:50:18.882538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 19:50:18.882545 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 8 19:50:18.882554 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 8 19:50:18.882561 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 19:50:18.882568 kernel: Using GB pages for direct mapping Oct 8 19:50:18.882575 kernel: ACPI: Early table checksum verification disabled Oct 8 19:50:18.882582 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 8 19:50:18.882709 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882721 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882730 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882744 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 8 19:50:18.882751 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882758 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882765 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:50:18.882772 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Oct 8 19:50:18.882780 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Oct 8 19:50:18.882787 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Oct 8 19:50:18.882801 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 8 19:50:18.882811 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Oct 8 19:50:18.882818 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Oct 8 19:50:18.882826 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Oct 8 19:50:18.882833 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Oct 8 19:50:18.882840 kernel: No NUMA configuration found Oct 8 19:50:18.882848 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 8 19:50:18.882855 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Oct 8 19:50:18.882865 kernel: Zone ranges: Oct 8 19:50:18.882872 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 19:50:18.882880 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 8 19:50:18.882887 kernel: Normal empty Oct 8 19:50:18.882894 kernel: Movable zone start for each node Oct 8 19:50:18.882901 kernel: Early memory node ranges Oct 8 19:50:18.882908 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 8 19:50:18.882916 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 8 19:50:18.882923 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 8 19:50:18.882935 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 19:50:18.882942 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 8 19:50:18.882949 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 8 19:50:18.882956 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 19:50:18.882964 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 19:50:18.882971 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Oct 8 19:50:18.882978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 19:50:18.882985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 19:50:18.882992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 19:50:18.883002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 19:50:18.883009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 19:50:18.883016 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 19:50:18.883024 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 19:50:18.883031 kernel: TSC deadline timer available Oct 8 19:50:18.883038 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 8 19:50:18.883045 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 19:50:18.883052 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 8 19:50:18.883059 kernel: kvm-guest: setup PV sched yield Oct 8 19:50:18.883071 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 8 19:50:18.883078 kernel: Booting paravirtualized kernel on KVM Oct 8 19:50:18.883086 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 19:50:18.883093 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 8 19:50:18.883100 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 8 19:50:18.883107 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 8 19:50:18.883114 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 8 19:50:18.883121 kernel: kvm-guest: PV spinlocks enabled Oct 8 19:50:18.883128 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 19:50:18.883139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 
root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:50:18.883147 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 19:50:18.883154 kernel: random: crng init done Oct 8 19:50:18.883161 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 19:50:18.883168 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 19:50:18.883176 kernel: Fallback order for Node 0: 0 Oct 8 19:50:18.883183 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Oct 8 19:50:18.883190 kernel: Policy zone: DMA32 Oct 8 19:50:18.883200 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 19:50:18.883207 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 136900K reserved, 0K cma-reserved) Oct 8 19:50:18.883214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 19:50:18.883221 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 19:50:18.883229 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 19:50:18.883236 kernel: Dynamic Preempt: voluntary Oct 8 19:50:18.883243 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 19:50:18.883251 kernel: rcu: RCU event tracing is enabled. Oct 8 19:50:18.883258 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 19:50:18.883268 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 19:50:18.883275 kernel: Rude variant of Tasks RCU enabled. Oct 8 19:50:18.883283 kernel: Tracing variant of Tasks RCU enabled. Oct 8 19:50:18.883292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 8 19:50:18.883299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 19:50:18.883306 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 8 19:50:18.883314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 19:50:18.883321 kernel: Console: colour VGA+ 80x25 Oct 8 19:50:18.883328 kernel: printk: console [ttyS0] enabled Oct 8 19:50:18.883335 kernel: ACPI: Core revision 20230628 Oct 8 19:50:18.883345 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 19:50:18.883352 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 19:50:18.883359 kernel: x2apic enabled Oct 8 19:50:18.883366 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 19:50:18.883374 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 8 19:50:18.883381 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 8 19:50:18.883388 kernel: kvm-guest: setup PV IPIs Oct 8 19:50:18.883406 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 19:50:18.883414 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 19:50:18.883421 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 8 19:50:18.883429 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 19:50:18.883439 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 19:50:18.883446 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 19:50:18.883454 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 19:50:18.883461 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 19:50:18.883469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 19:50:18.883478 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 19:50:18.883486 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 19:50:18.883494 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 19:50:18.883503 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 19:50:18.883512 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 19:50:18.883521 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 19:50:18.883530 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 19:50:18.883539 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 19:50:18.883549 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 19:50:18.883557 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 19:50:18.883564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 19:50:18.883572 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 19:50:18.883579 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 8 19:50:18.883601 kernel: Freeing SMP alternatives memory: 32K Oct 8 19:50:18.883614 kernel: pid_max: default: 32768 minimum: 301 Oct 8 19:50:18.883622 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 19:50:18.883629 kernel: landlock: Up and running. Oct 8 19:50:18.883640 kernel: SELinux: Initializing. Oct 8 19:50:18.883648 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:50:18.883662 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:50:18.883670 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 19:50:18.883677 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:50:18.883685 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:50:18.883695 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:50:18.883703 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 19:50:18.883710 kernel: ... version: 0 Oct 8 19:50:18.883720 kernel: ... bit width: 48 Oct 8 19:50:18.883728 kernel: ... generic registers: 6 Oct 8 19:50:18.883735 kernel: ... value mask: 0000ffffffffffff Oct 8 19:50:18.883743 kernel: ... max period: 00007fffffffffff Oct 8 19:50:18.883750 kernel: ... fixed-purpose events: 0 Oct 8 19:50:18.883757 kernel: ... event mask: 000000000000003f Oct 8 19:50:18.883765 kernel: signal: max sigframe size: 1776 Oct 8 19:50:18.883772 kernel: rcu: Hierarchical SRCU implementation. Oct 8 19:50:18.883780 kernel: rcu: Max phase no-delay instances is 400. Oct 8 19:50:18.883790 kernel: smp: Bringing up secondary CPUs ... Oct 8 19:50:18.883797 kernel: smpboot: x86: Booting SMP configuration: Oct 8 19:50:18.883805 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 8 19:50:18.883812 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 19:50:18.883819 kernel: smpboot: Max logical packages: 1 Oct 8 19:50:18.883827 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 8 19:50:18.883834 kernel: devtmpfs: initialized Oct 8 19:50:18.883842 kernel: x86/mm: Memory block size: 128MB Oct 8 19:50:18.883849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 19:50:18.883860 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 19:50:18.883867 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 19:50:18.883875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 19:50:18.883882 kernel: audit: initializing netlink subsys (disabled) Oct 8 19:50:18.883890 kernel: audit: type=2000 audit(1728417018.925:1): state=initialized audit_enabled=0 res=1 Oct 8 19:50:18.883898 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 19:50:18.883905 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 19:50:18.883913 kernel: cpuidle: using governor menu Oct 8 19:50:18.883920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 19:50:18.883930 kernel: dca service started, version 1.12.1 Oct 8 19:50:18.883938 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 19:50:18.883945 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 8 19:50:18.883953 kernel: PCI: Using configuration type 1 for base access Oct 8 19:50:18.883960 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 19:50:18.883968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 19:50:18.883975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 19:50:18.883983 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 19:50:18.883990 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 19:50:18.884000 kernel: ACPI: Added _OSI(Module Device) Oct 8 19:50:18.884008 kernel: ACPI: Added _OSI(Processor Device) Oct 8 19:50:18.884015 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 19:50:18.884023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 19:50:18.884030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 19:50:18.884037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 19:50:18.884045 kernel: ACPI: Interpreter enabled Oct 8 19:50:18.884052 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 19:50:18.884060 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 19:50:18.884070 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 19:50:18.884077 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 19:50:18.884085 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 19:50:18.884092 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 19:50:18.884302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 19:50:18.884439 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 19:50:18.884567 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 19:50:18.884577 kernel: PCI host bridge to bus 0000:00 Oct 8 19:50:18.884754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 19:50:18.884875 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 19:50:18.884991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 
8 19:50:18.885107 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 8 19:50:18.885222 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 19:50:18.885337 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 8 19:50:18.885458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 19:50:18.885645 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 19:50:18.885810 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 8 19:50:18.885939 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 8 19:50:18.886065 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 8 19:50:18.886352 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 8 19:50:18.886486 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 19:50:18.886670 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 19:50:18.886803 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 8 19:50:18.886931 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 8 19:50:18.887058 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 8 19:50:18.887202 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 8 19:50:18.887331 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Oct 8 19:50:18.887557 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 8 19:50:18.887753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 8 19:50:18.887955 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 8 19:50:18.888086 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Oct 8 19:50:18.888214 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 8 19:50:18.888340 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 8 19:50:18.888466 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 8 19:50:18.888673 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 19:50:18.888819 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 19:50:18.888962 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 19:50:18.889167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Oct 8 19:50:18.889298 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Oct 8 19:50:18.889439 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 19:50:18.889571 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 8 19:50:18.889605 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 19:50:18.889619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 19:50:18.889629 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 19:50:18.889640 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 19:50:18.889658 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 19:50:18.889669 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 19:50:18.889679 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 19:50:18.889688 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 19:50:18.889696 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 19:50:18.889708 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 19:50:18.889715 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 19:50:18.889723 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 19:50:18.889730 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 19:50:18.889738 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 19:50:18.889745 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 19:50:18.889753 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 19:50:18.889760 kernel: iommu: 
Default domain type: Translated Oct 8 19:50:18.889768 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 19:50:18.889778 kernel: PCI: Using ACPI for IRQ routing Oct 8 19:50:18.889786 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 19:50:18.889793 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 8 19:50:18.889801 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 8 19:50:18.889982 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 19:50:18.890148 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 19:50:18.890276 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 19:50:18.890287 kernel: vgaarb: loaded Oct 8 19:50:18.890300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 8 19:50:18.890308 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 19:50:18.890316 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 19:50:18.890323 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 19:50:18.890331 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 19:50:18.890339 kernel: pnp: PnP ACPI init Oct 8 19:50:18.890491 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 19:50:18.890503 kernel: pnp: PnP ACPI: found 6 devices Oct 8 19:50:18.890511 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 19:50:18.890523 kernel: NET: Registered PF_INET protocol family Oct 8 19:50:18.890530 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 19:50:18.890538 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 19:50:18.890560 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 19:50:18.890568 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 19:50:18.890575 kernel: TCP bind hash table entries: 32768 (order: 
8, 1048576 bytes, linear) Oct 8 19:50:18.890583 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 19:50:18.890658 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:50:18.890671 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:50:18.890678 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:50:18.890686 kernel: NET: Registered PF_XDP protocol family Oct 8 19:50:18.890814 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 19:50:18.890931 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 19:50:18.891047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 19:50:18.891162 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 8 19:50:18.891276 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 19:50:18.891391 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 8 19:50:18.891405 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:50:18.891413 kernel: Initialise system trusted keyrings Oct 8 19:50:18.891421 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:50:18.891429 kernel: Key type asymmetric registered Oct 8 19:50:18.891436 kernel: Asymmetric key parser 'x509' registered Oct 8 19:50:18.891444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 19:50:18.891452 kernel: io scheduler mq-deadline registered Oct 8 19:50:18.891460 kernel: io scheduler kyber registered Oct 8 19:50:18.891467 kernel: io scheduler bfq registered Oct 8 19:50:18.891478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 19:50:18.891486 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 19:50:18.891494 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 19:50:18.891501 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 8 19:50:18.891509 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 
19:50:18.891517 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 19:50:18.891525 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 19:50:18.891533 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 19:50:18.891540 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 19:50:18.891710 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 8 19:50:18.891724 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 19:50:18.891843 kernel: rtc_cmos 00:04: registered as rtc0 Oct 8 19:50:18.891963 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:50:18 UTC (1728417018) Oct 8 19:50:18.892081 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 19:50:18.892092 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 19:50:18.892100 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:50:18.892108 kernel: Segment Routing with IPv6 Oct 8 19:50:18.892120 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:50:18.892128 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:50:18.892136 kernel: Key type dns_resolver registered Oct 8 19:50:18.892143 kernel: IPI shorthand broadcast: enabled Oct 8 19:50:18.892151 kernel: sched_clock: Marking stable (662001806, 105082439)->(820815202, -53730957) Oct 8 19:50:18.892159 kernel: registered taskstats version 1 Oct 8 19:50:18.892166 kernel: Loading compiled-in X.509 certificates Oct 8 19:50:18.892174 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 19:50:18.892182 kernel: Key type .fscrypt registered Oct 8 19:50:18.892192 kernel: Key type fscrypt-provisioning registered Oct 8 19:50:18.892200 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 19:50:18.892208 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:50:18.892215 kernel: ima: No architecture policies found Oct 8 19:50:18.892223 kernel: clk: Disabling unused clocks Oct 8 19:50:18.892231 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 19:50:18.892239 kernel: Write protecting the kernel read-only data: 36864k Oct 8 19:50:18.892247 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 19:50:18.892257 kernel: Run /init as init process Oct 8 19:50:18.892264 kernel: with arguments: Oct 8 19:50:18.892272 kernel: /init Oct 8 19:50:18.892279 kernel: with environment: Oct 8 19:50:18.892287 kernel: HOME=/ Oct 8 19:50:18.892294 kernel: TERM=linux Oct 8 19:50:18.892302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:50:18.892312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:50:18.892324 systemd[1]: Detected virtualization kvm. Oct 8 19:50:18.892332 systemd[1]: Detected architecture x86-64. Oct 8 19:50:18.892340 systemd[1]: Running in initrd. Oct 8 19:50:18.892348 systemd[1]: No hostname configured, using default hostname. Oct 8 19:50:18.892356 systemd[1]: Hostname set to . Oct 8 19:50:18.892365 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:50:18.892373 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:50:18.892381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:50:18.892392 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:50:18.892401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 8 19:50:18.892422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:50:18.892433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 19:50:18.892442 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 19:50:18.892454 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 19:50:18.892463 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 19:50:18.892472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:50:18.892480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:50:18.892488 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:50:18.892497 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:50:18.892505 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:50:18.892514 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:50:18.892525 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:50:18.892534 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:50:18.892542 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:50:18.892551 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:50:18.892559 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:50:18.892568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:50:18.892576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:50:18.892585 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:50:18.892612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 8 19:50:18.892628 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:50:18.892640 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:50:18.892655 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:50:18.892664 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:50:18.892672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:50:18.892680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:50:18.892689 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:50:18.892697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:50:18.892705 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:50:18.892718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:50:18.892747 systemd-journald[193]: Collecting audit messages is disabled.
Oct 8 19:50:18.892768 systemd-journald[193]: Journal started
Oct 8 19:50:18.892789 systemd-journald[193]: Runtime Journal (/run/log/journal/91727ccb53a4429d9d004ea8d4624d15) is 6.0M, max 48.4M, 42.3M free.
Oct 8 19:50:18.887872 systemd-modules-load[194]: Inserted module 'overlay'
Oct 8 19:50:18.927840 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:50:18.933116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:50:18.936616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:50:18.939395 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:50:18.942390 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:50:18.944541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:50:18.955455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:50:18.958015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:50:18.965618 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:50:18.965956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:50:18.969310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:50:18.971838 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 8 19:50:18.972736 kernel: Bridge firewalling registered
Oct 8 19:50:18.974086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:50:18.976293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:50:18.987317 dracut-cmdline[221]: dracut-dracut-053
Oct 8 19:50:18.990248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:50:18.990876 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:50:19.002823 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:50:19.032868 systemd-resolved[238]: Positive Trust Anchors:
Oct 8 19:50:19.032885 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:50:19.032916 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:50:19.035506 systemd-resolved[238]: Defaulting to hostname 'linux'.
Oct 8 19:50:19.036643 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:50:19.042835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:50:19.078635 kernel: SCSI subsystem initialized
Oct 8 19:50:19.089183 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:50:19.099627 kernel: iscsi: registered transport (tcp)
Oct 8 19:50:19.121651 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:50:19.121728 kernel: QLogic iSCSI HBA Driver
Oct 8 19:50:19.176793 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:50:19.188745 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:50:19.216327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:50:19.216378 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:50:19.216392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:50:19.260649 kernel: raid6: avx2x4 gen() 25662 MB/s
Oct 8 19:50:19.277624 kernel: raid6: avx2x2 gen() 30912 MB/s
Oct 8 19:50:19.294717 kernel: raid6: avx2x1 gen() 25898 MB/s
Oct 8 19:50:19.294740 kernel: raid6: using algorithm avx2x2 gen() 30912 MB/s
Oct 8 19:50:19.312722 kernel: raid6: .... xor() 19817 MB/s, rmw enabled
Oct 8 19:50:19.312743 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:50:19.333621 kernel: xor: automatically using best checksumming function avx
Oct 8 19:50:19.485644 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:50:19.500671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:50:19.513805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:50:19.525966 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Oct 8 19:50:19.530711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:50:19.539736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:50:19.554439 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Oct 8 19:50:19.591835 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:50:19.603726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:50:19.671069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:50:19.680789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:50:19.693179 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:50:19.694759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:50:19.695568 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:50:19.699054 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:50:19.708809 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:50:19.720692 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:50:19.725819 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:50:19.728209 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:50:19.729659 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:50:19.733213 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:50:19.733235 kernel: GPT:9289727 != 19775487
Oct 8 19:50:19.733246 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:50:19.733263 kernel: GPT:9289727 != 19775487
Oct 8 19:50:19.734613 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:50:19.734643 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:50:19.740679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:50:19.751150 kernel: libata version 3.00 loaded.
Oct 8 19:50:19.740753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:50:19.754158 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:50:19.754174 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:50:19.746574 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:50:19.747720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:50:19.747780 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:50:19.748976 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:50:19.759839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:50:19.810683 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:50:19.811084 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:50:19.811671 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:50:19.811849 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:50:19.819641 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
Oct 8 19:50:19.819682 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (459)
Oct 8 19:50:19.822648 kernel: scsi host0: ahci
Oct 8 19:50:19.825646 kernel: scsi host1: ahci
Oct 8 19:50:19.826455 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:50:19.860209 kernel: scsi host2: ahci
Oct 8 19:50:19.860431 kernel: scsi host3: ahci
Oct 8 19:50:19.860606 kernel: scsi host4: ahci
Oct 8 19:50:19.860779 kernel: scsi host5: ahci
Oct 8 19:50:19.860934 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 8 19:50:19.860953 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 8 19:50:19.860963 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 8 19:50:19.860973 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 8 19:50:19.860983 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 8 19:50:19.860993 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 8 19:50:19.863738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:50:19.878399 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:50:19.885937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:50:19.892462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:50:19.895784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:50:19.906815 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:50:19.910068 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:50:19.917528 disk-uuid[553]: Primary Header is updated.
Oct 8 19:50:19.917528 disk-uuid[553]: Secondary Entries is updated.
Oct 8 19:50:19.917528 disk-uuid[553]: Secondary Header is updated.
Oct 8 19:50:19.920874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:50:19.925610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:50:19.940723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:50:20.142632 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:50:20.142697 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:50:20.143662 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:50:20.143758 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:50:20.144622 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:50:20.145625 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:50:20.146631 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:50:20.147763 kernel: ata3.00: applying bridge limits
Oct 8 19:50:20.147777 kernel: ata3.00: configured for UDMA/100
Oct 8 19:50:20.148636 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:50:20.198634 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:50:20.198867 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:50:20.212630 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:50:20.946347 disk-uuid[554]: The operation has completed successfully.
Oct 8 19:50:20.947711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:50:20.972986 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:50:20.973130 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:50:20.998795 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:50:21.004794 sh[591]: Success
Oct 8 19:50:21.025617 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:50:21.058851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:50:21.072158 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:50:21.083526 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:50:21.140560 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:50:21.140661 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:50:21.140673 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:50:21.140684 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:50:21.141283 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:50:21.146719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:50:21.149313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:50:21.170202 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:50:21.173892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:50:21.184405 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:50:21.184472 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:50:21.184484 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:50:21.188621 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:50:21.198502 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:50:21.200206 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:50:21.214032 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:50:21.232830 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:50:21.306690 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:50:21.326917 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:50:21.358115 systemd-networkd[772]: lo: Link UP
Oct 8 19:50:21.358128 systemd-networkd[772]: lo: Gained carrier
Oct 8 19:50:21.361241 systemd-networkd[772]: Enumeration completed
Oct 8 19:50:21.361548 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:50:21.363410 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:50:21.363415 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:50:21.364391 systemd[1]: Reached target network.target - Network.
Oct 8 19:50:21.370311 systemd-networkd[772]: eth0: Link UP
Oct 8 19:50:21.370321 systemd-networkd[772]: eth0: Gained carrier
Oct 8 19:50:21.370328 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:50:21.371676 ignition[689]: Ignition 2.19.0
Oct 8 19:50:21.371685 ignition[689]: Stage: fetch-offline
Oct 8 19:50:21.371740 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:21.371750 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:21.371860 ignition[689]: parsed url from cmdline: ""
Oct 8 19:50:21.371864 ignition[689]: no config URL provided
Oct 8 19:50:21.371869 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:50:21.371879 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:50:21.371909 ignition[689]: op(1): [started] loading QEMU firmware config module
Oct 8 19:50:21.371915 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:50:21.385639 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:50:21.390456 ignition[689]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:50:21.429258 ignition[689]: parsing config with SHA512: b5add1cbdfdfece1b03b68169d8bf93550550509614bb24c067d900e960b0d1cf9bba212d9e7955e73a68a264060d5f16add7c906fc5d58c04faa20eddf3e71a
Oct 8 19:50:21.433196 unknown[689]: fetched base config from "system"
Oct 8 19:50:21.433217 unknown[689]: fetched user config from "qemu"
Oct 8 19:50:21.433753 ignition[689]: fetch-offline: fetch-offline passed
Oct 8 19:50:21.434778 systemd-resolved[238]: Detected conflict on linux IN A 10.0.0.19
Oct 8 19:50:21.433837 ignition[689]: Ignition finished successfully
Oct 8 19:50:21.434787 systemd-resolved[238]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Oct 8 19:50:21.436198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:50:21.437707 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:50:21.443782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:50:21.535188 ignition[784]: Ignition 2.19.0
Oct 8 19:50:21.535207 ignition[784]: Stage: kargs
Oct 8 19:50:21.535436 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:21.535449 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:21.536439 ignition[784]: kargs: kargs passed
Oct 8 19:50:21.536492 ignition[784]: Ignition finished successfully
Oct 8 19:50:21.540659 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:50:21.557804 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:50:21.574230 ignition[793]: Ignition 2.19.0
Oct 8 19:50:21.574242 ignition[793]: Stage: disks
Oct 8 19:50:21.574425 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:21.574437 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:21.577451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:50:21.575226 ignition[793]: disks: disks passed
Oct 8 19:50:21.579158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:50:21.575273 ignition[793]: Ignition finished successfully
Oct 8 19:50:21.581039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:50:21.582919 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:50:21.584996 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:50:21.586789 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:50:21.595766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:50:21.609104 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:50:21.615823 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:50:21.630706 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:50:21.723633 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:50:21.724222 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:50:21.726600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:50:21.742736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:50:21.745898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:50:21.748761 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:50:21.748822 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:50:21.757136 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Oct 8 19:50:21.757165 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:50:21.757180 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:50:21.757194 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:50:21.751008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:50:21.761612 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:50:21.762715 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:50:21.764657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:50:21.767791 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:50:21.805932 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:50:21.811652 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:50:21.816987 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:50:21.822695 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:50:21.924191 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:50:21.943704 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:50:21.947551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:50:21.953652 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:50:21.973759 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:50:21.983305 ignition[925]: INFO : Ignition 2.19.0
Oct 8 19:50:21.983305 ignition[925]: INFO : Stage: mount
Oct 8 19:50:21.985623 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:21.985623 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:21.985623 ignition[925]: INFO : mount: mount passed
Oct 8 19:50:21.985623 ignition[925]: INFO : Ignition finished successfully
Oct 8 19:50:21.986903 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:50:22.002726 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:50:22.100328 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:50:22.113831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:50:22.122298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Oct 8 19:50:22.122333 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:50:22.122345 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:50:22.123790 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:50:22.126630 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:50:22.128440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:50:22.171973 ignition[955]: INFO : Ignition 2.19.0
Oct 8 19:50:22.171973 ignition[955]: INFO : Stage: files
Oct 8 19:50:22.174027 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:22.174027 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:22.174027 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:50:22.178246 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:50:22.178246 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:50:22.181269 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:50:22.181269 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:50:22.181269 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:50:22.181269 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:50:22.181269 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:50:22.179348 unknown[955]: wrote ssh authorized keys file for user: core
Oct 8 19:50:22.233036 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:50:22.399030 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:50:22.399030 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:50:22.402801 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:50:22.404490 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:50:22.406883 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:50:22.406883 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:50:22.406883 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:50:22.406883 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:50:22.406883 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:50:22.415617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:50:22.415617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:50:22.419000 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:50:22.421469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:50:22.423896 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:50:22.426008 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 19:50:22.784115 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:50:22.872798 systemd-networkd[772]: eth0: Gained IPv6LL
Oct 8 19:50:23.498311 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:50:23.498311 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:50:23.503009 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:50:23.505419 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:50:23.505419 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:50:23.505419 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:50:23.510529 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:50:23.510529 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:50:23.515034 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:50:23.515034 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:50:23.538275 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:50:23.545217 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:50:23.547332 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:50:23.547332 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:50:23.550660 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:50:23.552250 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:50:23.554432 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:50:23.556183 ignition[955]: INFO : files: files passed
Oct 8 19:50:23.556965 ignition[955]: INFO : Ignition finished successfully
Oct 8 19:50:23.560351 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:50:23.567855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:50:23.569048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:50:23.576996 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:50:23.577167 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:50:23.583358 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:50:23.588112 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:50:23.588112 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:50:23.591566 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:50:23.593247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:50:23.595243 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:50:23.603752 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:50:23.636495 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:50:23.636668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:50:23.637708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:50:23.640869 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:50:23.641209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:50:23.642969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:50:23.686317 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:50:23.700832 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:50:23.713897 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:50:23.714381 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:50:23.716650 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:50:23.716940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:50:23.717075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:50:23.721337 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:50:23.721699 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:50:23.722183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:50:23.722521 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:50:23.722926 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:50:23.723303 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:50:23.723977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:50:23.724338 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:50:23.724955 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:50:23.725370 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:50:23.725949 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:50:23.726077 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:50:23.746240 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:50:23.750510 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:50:23.751149 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:50:23.751305 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:50:23.753500 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:50:23.753745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:50:23.758549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:50:23.758731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:50:23.759202 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:50:23.759510 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:50:23.759667 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:50:23.760188 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:50:23.760530 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:50:23.761049 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:50:23.761150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:50:23.770765 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:50:23.770905 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:50:23.773856 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:50:23.774099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:50:23.775712 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:50:23.775900 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:50:23.787881 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:50:23.790893 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:50:23.792716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:50:23.792905 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:50:23.793338 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:50:23.793493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:50:23.803421 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:50:23.803564 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:50:23.807032 ignition[1009]: INFO : Ignition 2.19.0
Oct 8 19:50:23.807032 ignition[1009]: INFO : Stage: umount
Oct 8 19:50:23.807032 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:50:23.807032 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:50:23.819428 ignition[1009]: INFO : umount: umount passed
Oct 8 19:50:23.819428 ignition[1009]: INFO : Ignition finished successfully
Oct 8 19:50:23.810646 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:50:23.810819 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:50:23.820441 systemd[1]: Stopped target network.target - Network.
Oct 8 19:50:23.824302 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:50:23.824401 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:50:23.824957 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:50:23.825010 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:50:23.825305 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:50:23.825352 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:50:23.825900 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:50:23.825950 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:50:23.826537 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:50:23.834123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:50:23.843180 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:50:23.843346 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:50:23.843750 systemd-networkd[772]: eth0: DHCPv6 lease lost
Oct 8 19:50:23.846561 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:50:23.846754 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:50:23.850079 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:50:23.850183 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:50:23.862701 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:50:23.863022 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:50:23.863095 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:50:23.865614 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:50:23.865674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:50:23.866174 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:50:23.866219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:50:23.871467 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:50:23.871541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:50:23.872069 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:50:23.883324 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:50:23.883496 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:50:23.893480 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:50:23.893795 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:50:23.896547 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:50:23.896638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:50:23.899150 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:50:23.899197 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:50:23.901783 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:50:23.901883 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:50:23.904434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:50:23.904490 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:50:23.906629 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:50:23.906684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:50:23.913749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:50:23.914381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:50:23.914442 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:50:23.918089 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 19:50:23.918204 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:50:23.920866 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:50:23.920944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:50:23.923784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:50:23.923851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:50:23.926275 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:50:23.927020 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:50:23.927180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:50:24.111753 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:50:24.111904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:50:24.115159 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:50:24.116398 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:50:24.116459 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:50:24.133805 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:50:24.140819 systemd[1]: Switching root.
Oct 8 19:50:24.177209 systemd-journald[193]: Journal stopped
Oct 8 19:50:25.423216 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:50:25.423355 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:50:25.423375 kernel: SELinux: policy capability open_perms=1
Oct 8 19:50:25.423387 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:50:25.423399 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:50:25.423414 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:50:25.423427 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:50:25.423443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:50:25.423458 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:50:25.423486 kernel: audit: type=1403 audit(1728417024.594:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:50:25.423502 systemd[1]: Successfully loaded SELinux policy in 42.487ms.
Oct 8 19:50:25.423532 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.109ms.
Oct 8 19:50:25.423551 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:50:25.423568 systemd[1]: Detected virtualization kvm.
Oct 8 19:50:25.423584 systemd[1]: Detected architecture x86-64.
Oct 8 19:50:25.423665 systemd[1]: Detected first boot.
Oct 8 19:50:25.423688 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:50:25.423704 zram_generator::config[1052]: No configuration found.
Oct 8 19:50:25.423728 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:50:25.423746 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:50:25.423763 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:50:25.423783 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:50:25.423799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:50:25.423816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:50:25.423838 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:50:25.423856 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:50:25.423874 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:50:25.423891 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:50:25.423911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:50:25.423927 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:50:25.423944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:50:25.423961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:50:25.423978 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:50:25.424000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:50:25.424023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:50:25.424041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:50:25.424057 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:50:25.424073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:50:25.424089 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:50:25.424105 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:50:25.424122 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:50:25.424144 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:50:25.424162 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:50:25.424179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:50:25.424202 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:50:25.424219 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:50:25.424235 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:50:25.424252 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:50:25.424267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:50:25.424286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:50:25.424309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:50:25.424328 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:50:25.424344 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:50:25.424360 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:50:25.424375 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:50:25.424391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:25.424406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:50:25.424428 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:50:25.424445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:50:25.424478 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:50:25.424496 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:50:25.424511 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:50:25.424532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:50:25.424548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:50:25.424565 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:50:25.424585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:50:25.424656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:50:25.424676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:50:25.424694 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:50:25.424710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:50:25.424729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:50:25.424745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:50:25.424760 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:50:25.424776 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:50:25.424793 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:50:25.424814 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:50:25.424830 kernel: loop: module loaded
Oct 8 19:50:25.424846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:50:25.424863 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:50:25.424879 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:50:25.424895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:50:25.424913 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:50:25.424928 systemd[1]: Stopped verity-setup.service.
Oct 8 19:50:25.424946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:25.424967 kernel: fuse: init (API version 7.39)
Oct 8 19:50:25.424990 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:50:25.425006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:50:25.425023 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:50:25.425038 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:50:25.425055 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:50:25.425075 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:50:25.425092 kernel: ACPI: bus type drm_connector registered
Oct 8 19:50:25.425137 systemd-journald[1122]: Collecting audit messages is disabled.
Oct 8 19:50:25.425167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:50:25.425183 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:50:25.425199 systemd-journald[1122]: Journal started
Oct 8 19:50:25.425232 systemd-journald[1122]: Runtime Journal (/run/log/journal/91727ccb53a4429d9d004ea8d4624d15) is 6.0M, max 48.4M, 42.3M free.
Oct 8 19:50:25.158530 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:50:25.176659 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:50:25.177298 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:50:25.427523 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:50:25.430717 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:50:25.433303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:50:25.433778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:50:25.435766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:50:25.437681 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:50:25.437949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:50:25.440100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:50:25.440370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:50:25.442415 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:50:25.442694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:50:25.444484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:50:25.444764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:50:25.446530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:50:25.448611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:50:25.450823 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:50:25.475236 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:50:25.484838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:50:25.488479 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:50:25.489987 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:50:25.490042 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:50:25.492806 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:50:25.496057 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:50:25.499249 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:50:25.500729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:50:25.503876 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:50:25.507096 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:50:25.509141 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:50:25.513185 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:50:25.515003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:50:25.517395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:50:25.526741 systemd-journald[1122]: Time spent on flushing to /var/log/journal/91727ccb53a4429d9d004ea8d4624d15 is 35.328ms for 952 entries.
Oct 8 19:50:25.526741 systemd-journald[1122]: System Journal (/var/log/journal/91727ccb53a4429d9d004ea8d4624d15) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:50:25.574302 systemd-journald[1122]: Received client request to flush runtime journal.
Oct 8 19:50:25.545824 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:50:25.549307 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:50:25.555658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:50:25.558425 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:50:25.560079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:50:25.574131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:50:25.578366 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:50:25.584832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:50:25.588714 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 19:50:25.596200 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
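The Runtime Journal and System Journal sizes journald reports above (6.0M/48.4M and 8.0M/195.6M) are derived from the size of the backing filesystem unless explicit caps are set. A hedged sketch of the relevant journald.conf knobs — the values below are illustrative, not this machine's actual configuration:

```ini
# /etc/systemd/journald.conf — illustrative, not taken from this host
[Journal]
Storage=persistent
# Cap the volatile journal in /run/log/journal:
RuntimeMaxUse=48M
# Cap the persistent journal in /var/log/journal:
SystemMaxUse=195M
```

The "Received client request to flush runtime journal" entry is systemd-journal-flush.service asking journald to move the early-boot runtime journal into the persistent store under /var/log/journal.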
Oct 8 19:50:25.619901 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:50:25.624770 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:50:25.634963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:50:25.628319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:50:25.669035 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Oct 8 19:50:25.669066 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Oct 8 19:50:25.678046 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:50:25.679796 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:50:25.690872 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:50:25.701633 kernel: loop1: detected capacity change from 0 to 142488
Oct 8 19:50:25.728234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:50:25.729225 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:50:25.743556 kernel: loop2: detected capacity change from 0 to 211296
Oct 8 19:50:25.757301 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:50:25.772199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:50:25.786369 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 19:50:25.846667 kernel: loop4: detected capacity change from 0 to 142488
Oct 8 19:50:25.866286 kernel: loop5: detected capacity change from 0 to 211296
Oct 8 19:50:25.867436 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Oct 8 19:50:25.867484 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Oct 8 19:50:25.875331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:50:25.878516 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:50:25.879812 (sd-merge)[1192]: Merged extensions into '/usr'.
Oct 8 19:50:25.884494 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:50:25.884758 systemd[1]: Reloading...
Oct 8 19:50:25.968645 zram_generator::config[1220]: No configuration found.
Oct 8 19:50:26.134184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:50:26.198077 systemd[1]: Reloading finished in 312 ms.
Oct 8 19:50:26.209750 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:50:26.235118 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:50:26.236838 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:50:26.252882 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:50:26.255300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:50:26.270213 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:50:26.270399 systemd[1]: Reloading...
Oct 8 19:50:26.314307 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:50:26.314919 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:50:26.316317 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
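The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr. For the merge to proceed, each extension must carry an extension-release file whose fields match the host's os-release. A hedged sketch of the expected layout for one such extension — the file names and field values here are illustrative, not read from this machine:

```
# Hypothetical contents of a "kubernetes" sysext image (illustrative)
usr/
  bin/kubectl                      # payload overlaid onto the host /usr
  lib/extension-release.d/extension-release.kubernetes
      # identity check against the host os-release, e.g.:
      #   ID=flatcar
      #   SYSEXT_LEVEL=1.0
```

After merging, systemd reloads its unit files (the "Reloading requested from client PID 1166 ('systemd-sysext')" entry) so that units shipped inside the extensions become visible.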
Oct 8 19:50:26.317199 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Oct 8 19:50:26.317358 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Oct 8 19:50:26.324874 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:50:26.324974 systemd-tmpfiles[1258]: Skipping /boot
Oct 8 19:50:26.350537 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:50:26.350791 systemd-tmpfiles[1258]: Skipping /boot
Oct 8 19:50:26.365635 zram_generator::config[1286]: No configuration found.
Oct 8 19:50:26.492558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:50:26.552202 systemd[1]: Reloading finished in 281 ms.
Oct 8 19:50:26.570965 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:50:26.584540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:50:26.597065 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:50:26.600192 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:50:26.602776 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:50:26.607537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:50:26.610357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:50:26.618469 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:50:26.623222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
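The "Duplicate line for path … ignoring" warnings above arise when two tmpfiles.d fragments declare an entry for the same path; systemd-tmpfiles keeps the first and ignores the rest. A minimal illustrative sketch of the line format involved (hypothetical file and entries, not the actual Flatcar fragments named in the log):

```
# /etc/tmpfiles.d/example.conf — illustrative tmpfiles.d(5) syntax
# Type  Path          Mode  User  Group  Age  Argument
d       /run/example  0755  root  root   -    -
# A second fragment declaring /run/example again would trigger
# the same "Duplicate line for path" warning seen above.
```

These warnings are benign; the log shows tmpfiles-setup finishing successfully shortly after.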
Oct 8 19:50:26.623475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:50:26.625526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:50:26.632690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:50:26.638926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:50:26.641887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:50:26.647981 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:50:26.649555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:26.650780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:50:26.651048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:50:26.653121 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:50:26.655189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:50:26.655376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:50:26.657834 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:50:26.658048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:50:26.668817 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Oct 8 19:50:26.670570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:26.670959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:50:26.674023 augenrules[1352]: No rules Oct 8 19:50:26.678952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:50:26.681964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:50:26.687674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:50:26.688862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:50:26.693312 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 19:50:26.694735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:50:26.696151 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:50:26.699746 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:50:26.701999 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:50:26.704101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:50:26.704363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:50:26.706315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:50:26.706579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:50:26.708285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:50:26.710763 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:50:26.711009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:50:26.713621 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:50:26.731557 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Oct 8 19:50:26.751139 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:50:26.756251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:26.756425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:50:26.761627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384)
Oct 8 19:50:26.767877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:50:26.769295 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Oct 8 19:50:26.770600 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:50:26.773755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:50:26.779671 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1370)
Oct 8 19:50:26.778718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:50:26.779923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:50:26.832845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:50:26.837779 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:50:26.844913 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:50:26.844956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:50:26.845687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:50:26.845918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:50:26.847480 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:50:26.847676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:50:26.851074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:50:26.851260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:50:26.865474 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:50:26.865558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:50:26.932304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:50:26.932524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:50:26.938430 systemd-resolved[1327]: Positive Trust Anchors:
Oct 8 19:50:26.938891 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:50:26.939011 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:50:26.939649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:50:26.943926 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Oct 8 19:50:26.947979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:50:26.949324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:50:26.976059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:50:26.983972 systemd-networkd[1399]: lo: Link UP
Oct 8 19:50:26.983991 systemd-networkd[1399]: lo: Gained carrier
Oct 8 19:50:26.986760 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:50:26.988292 systemd-networkd[1399]: Enumeration completed
Oct 8 19:50:26.988482 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:50:26.989170 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:50:26.990447 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:50:26.990610 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 8 19:50:26.991122 systemd[1]: Reached target network.target - Network.
Oct 8 19:50:26.991972 systemd-networkd[1399]: eth0: Link UP
Oct 8 19:50:26.992030 systemd-networkd[1399]: eth0: Gained carrier
Oct 8 19:50:26.992195 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:50:26.994128 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:50:26.997280 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:50:27.009694 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:50:27.012268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:50:27.013635 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:50:27.014504 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:50:28.117797 systemd-resolved[1327]: Clock change detected. Flushing caches.
Oct 8 19:50:28.118043 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:50:28.118133 systemd-timesyncd[1400]: Initial clock synchronization to Tue 2024-10-08 19:50:28.117719 UTC.
Oct 8 19:50:28.118995 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:50:28.127480 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 19:50:28.127889 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 19:50:28.128091 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 8 19:50:28.128106 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 19:50:28.239729 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:50:28.241150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:50:28.250995 kernel: kvm_amd: TSC scaling supported
Oct 8 19:50:28.251233 kernel: kvm_amd: Nested Virtualization enabled
Oct 8 19:50:28.251251 kernel: kvm_amd: Nested Paging enabled
Oct 8 19:50:28.251266 kernel: kvm_amd: LBR virtualization supported
Oct 8 19:50:28.252042 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 8 19:50:28.252077 kernel: kvm_amd: Virtual GIF supported
Oct 8 19:50:28.278729 kernel: EDAC MC: Ver: 3.0.0
Oct 8 19:50:28.310112 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:50:28.373398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:50:28.391011 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:50:28.402780 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:50:28.444334 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:50:28.446032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:50:28.447260 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:50:28.448562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:50:28.449945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:50:28.451584 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:50:28.452966 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:50:28.454258 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:50:28.455507 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:50:28.455551 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:50:28.456451 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:50:28.458084 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:50:28.461328 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:50:28.473198 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:50:28.476293 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:50:28.478296 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:50:28.479855 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:50:28.481114 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:50:28.482233 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:50:28.482263 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:50:28.483506 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:50:28.485994 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:50:28.491109 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:50:28.495935 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:50:28.497390 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:50:28.500413 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:50:28.500140 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:50:28.501023 jq[1434]: false
Oct 8 19:50:28.505817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:50:28.511141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:50:28.515907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:50:28.524271 dbus-daemon[1433]: [system] SELinux support is enabled
Oct 8 19:50:28.524818 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:50:28.526587 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:50:28.527138 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:50:28.528987 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:50:28.532336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:50:28.534483 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found loop3
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found loop4
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found loop5
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found sr0
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda1
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda2
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda3
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found usr
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda4
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda6
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda7
Oct 8 19:50:28.539233 extend-filesystems[1435]: Found vda9
Oct 8 19:50:28.539233 extend-filesystems[1435]: Checking size of /dev/vda9
Oct 8 19:50:28.539580 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:50:28.572984 extend-filesystems[1435]: Resized partition /dev/vda9
Oct 8 19:50:28.573923 jq[1448]: true
Oct 8 19:50:28.540677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:50:28.540932 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:50:28.541270 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:50:28.541476 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:50:28.545390 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:50:28.545659 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:50:28.563395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:50:28.563433 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:50:28.563946 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:50:28.563967 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:50:28.575159 jq[1458]: true
Oct 8 19:50:28.569529 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:50:28.575578 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:50:28.587722 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:50:28.589924 update_engine[1444]: I20241008 19:50:28.589834  1444 main.cc:92] Flatcar Update Engine starting
Oct 8 19:50:28.592771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373)
Oct 8 19:50:28.592980 update_engine[1444]: I20241008 19:50:28.592947  1444 update_check_scheduler.cc:74] Next update check in 2m42s
Oct 8 19:50:28.594815 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:50:28.651931 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:50:28.658761 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 19:50:28.669040 tar[1453]: linux-amd64/helm
Oct 8 19:50:28.741102 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 8 19:50:28.743839 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 19:50:28.743839 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:50:28.743839 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 19:50:28.762705 extend-filesystems[1435]: Resized filesystem in /dev/vda9
Oct 8 19:50:28.763735 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:50:28.743864 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 19:50:28.744370 systemd-logind[1442]: New seat seat0.
Oct 8 19:50:28.745944 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:50:28.746589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:50:28.756143 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:50:28.760835 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:50:28.766246 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 19:50:28.769126 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:50:28.870671 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:50:28.912371 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:50:28.914271 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:50:28.926751 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:50:28.933829 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:39926.service - OpenSSH per-connection server daemon (10.0.0.1:39926).
Oct 8 19:50:28.957313 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:50:28.957628 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:50:28.965850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:50:29.017512 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:50:29.026928 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:50:29.034991 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:50:29.036298 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:50:29.065126 sshd[1506]: Accepted publickey for core from 10.0.0.1 port 39926 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:50:29.068399 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:50:29.079400 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:50:29.086994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:50:29.091248 systemd-logind[1442]: New session 1 of user core.
Oct 8 19:50:29.107050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:50:29.141106 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:50:29.157739 (systemd)[1522]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:50:29.164058 containerd[1464]: time="2024-10-08T19:50:29.163908849Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:50:29.226216 containerd[1464]: time="2024-10-08T19:50:29.226113205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.228465 containerd[1464]: time="2024-10-08T19:50:29.228420562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:50:29.228465 containerd[1464]: time="2024-10-08T19:50:29.228453814Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:50:29.228534 containerd[1464]: time="2024-10-08T19:50:29.228469333Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:50:29.228834 containerd[1464]: time="2024-10-08T19:50:29.228746272Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:50:29.228834 containerd[1464]: time="2024-10-08T19:50:29.228773263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.228900 containerd[1464]: time="2024-10-08T19:50:29.228855908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:50:29.228900 containerd[1464]: time="2024-10-08T19:50:29.228869333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229138 containerd[1464]: time="2024-10-08T19:50:29.229107490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229138 containerd[1464]: time="2024-10-08T19:50:29.229128409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229186 containerd[1464]: time="2024-10-08T19:50:29.229141203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229186 containerd[1464]: time="2024-10-08T19:50:29.229151923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229324 containerd[1464]: time="2024-10-08T19:50:29.229269664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229594 containerd[1464]: time="2024-10-08T19:50:29.229573333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229769 containerd[1464]: time="2024-10-08T19:50:29.229745536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:50:29.229769 containerd[1464]: time="2024-10-08T19:50:29.229764963Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:50:29.229962 containerd[1464]: time="2024-10-08T19:50:29.229900838Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:50:29.229992 containerd[1464]: time="2024-10-08T19:50:29.229976660Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:50:29.237104 containerd[1464]: time="2024-10-08T19:50:29.237053752Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:50:29.237145 containerd[1464]: time="2024-10-08T19:50:29.237120898Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:50:29.237145 containerd[1464]: time="2024-10-08T19:50:29.237137088Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:50:29.237183 containerd[1464]: time="2024-10-08T19:50:29.237161213Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:50:29.237183 containerd[1464]: time="2024-10-08T19:50:29.237176272Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:50:29.237455 containerd[1464]: time="2024-10-08T19:50:29.237336142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:50:29.237702 containerd[1464]: time="2024-10-08T19:50:29.237654358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:50:29.237852 containerd[1464]: time="2024-10-08T19:50:29.237813988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:50:29.237852 containerd[1464]: time="2024-10-08T19:50:29.237835227Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:50:29.237852 containerd[1464]: time="2024-10-08T19:50:29.237851628Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:50:29.237917 containerd[1464]: time="2024-10-08T19:50:29.237868690Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237917 containerd[1464]: time="2024-10-08T19:50:29.237881845Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237917 containerd[1464]: time="2024-10-08T19:50:29.237899799Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237917 containerd[1464]: time="2024-10-08T19:50:29.237916680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237996 containerd[1464]: time="2024-10-08T19:50:29.237931989Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237996 containerd[1464]: time="2024-10-08T19:50:29.237945033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237996 containerd[1464]: time="2024-10-08T19:50:29.237959891Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237996 containerd[1464]: time="2024-10-08T19:50:29.237970521Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:50:29.237996 containerd[1464]: time="2024-10-08T19:50:29.237993785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238009865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238022168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238033860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238049529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238063155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238086 containerd[1464]: time="2024-10-08T19:50:29.238076801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238089194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238103951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238120603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238132765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238147964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238160688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238184302Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:50:29.238207 containerd[1464]: time="2024-10-08T19:50:29.238210050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238346 containerd[1464]: time="2024-10-08T19:50:29.238223155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.238346 containerd[1464]: time="2024-10-08T19:50:29.238257570Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:50:29.240089 containerd[1464]: time="2024-10-08T19:50:29.240065730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:50:29.240147 containerd[1464]: time="2024-10-08T19:50:29.240094745Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:50:29.240147 containerd[1464]: time="2024-10-08T19:50:29.240107559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:50:29.240147 containerd[1464]: time="2024-10-08T19:50:29.240123038Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:50:29.240147 containerd[1464]: time="2024-10-08T19:50:29.240134349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.240220 containerd[1464]: time="2024-10-08T19:50:29.240163865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:50:29.240220 containerd[1464]: time="2024-10-08T19:50:29.240185245Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:50:29.240220 containerd[1464]: time="2024-10-08T19:50:29.240199341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:50:29.240682 containerd[1464]: time="2024-10-08T19:50:29.240536964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:50:29.240911 containerd[1464]: time="2024-10-08T19:50:29.240831086Z" level=info msg="Connect containerd service"
Oct 8 19:50:29.240911 containerd[1464]: time="2024-10-08T19:50:29.240890437Z" level=info msg="using legacy CRI server"
Oct 8 19:50:29.240966 containerd[1464]: time="2024-10-08T19:50:29.240898793Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:50:29.241307 containerd[1464]: time="2024-10-08T19:50:29.241279487Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243267866Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243443926Z" level=info msg="Start subscribing containerd event"
Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243539806Z" level=info msg="Start recovering state"
Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243623202Z" level=info msg="Start event monitor"
Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243643741Z" level=info msg="Start snapshots
syncer" Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243657597Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243669389Z" level=info msg="Start streaming server" Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243790365Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243847533Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:50:29.243966 containerd[1464]: time="2024-10-08T19:50:29.243918396Z" level=info msg="containerd successfully booted in 0.082375s" Oct 8 19:50:29.244053 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:50:29.343906 systemd[1522]: Queued start job for default target default.target. Oct 8 19:50:29.385363 systemd[1522]: Created slice app.slice - User Application Slice. Oct 8 19:50:29.385401 systemd[1522]: Reached target paths.target - Paths. Oct 8 19:50:29.385421 systemd[1522]: Reached target timers.target - Timers. Oct 8 19:50:29.387536 systemd[1522]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:50:29.401852 systemd[1522]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:50:29.402005 systemd[1522]: Reached target sockets.target - Sockets. Oct 8 19:50:29.402029 systemd[1522]: Reached target basic.target - Basic System. Oct 8 19:50:29.402072 systemd[1522]: Reached target default.target - Main User Target. Oct 8 19:50:29.402113 systemd[1522]: Startup finished in 233ms. Oct 8 19:50:29.402646 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:50:29.405408 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:50:29.446635 tar[1453]: linux-amd64/LICENSE Oct 8 19:50:29.446635 tar[1453]: linux-amd64/README.md Oct 8 19:50:29.460734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 8 19:50:29.464907 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940). Oct 8 19:50:29.499081 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:29.500796 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:29.505132 systemd-logind[1442]: New session 2 of user core. Oct 8 19:50:29.514840 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:50:29.570246 sshd[1540]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:29.577402 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:39940.service: Deactivated successfully. Oct 8 19:50:29.579225 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:50:29.580559 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:50:29.581845 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:39946.service - OpenSSH per-connection server daemon (10.0.0.1:39946). Oct 8 19:50:29.583916 systemd-logind[1442]: Removed session 2. Oct 8 19:50:29.616810 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 39946 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:29.618363 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:29.622460 systemd-logind[1442]: New session 3 of user core. Oct 8 19:50:29.628863 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:50:29.684531 sshd[1547]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:29.688059 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:39946.service: Deactivated successfully. Oct 8 19:50:29.690039 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:50:29.690621 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:50:29.691550 systemd-logind[1442]: Removed session 3. 
Oct 8 19:50:30.054062 systemd-networkd[1399]: eth0: Gained IPv6LL Oct 8 19:50:30.058004 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:50:30.060038 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:50:30.075916 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:50:30.078851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:50:30.081111 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:50:30.102395 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:50:30.102650 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:50:30.104275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:50:30.107384 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:50:30.731353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:50:30.733320 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:50:30.734777 systemd[1]: Startup finished in 794ms (kernel) + 5.896s (initrd) + 5.078s (userspace) = 11.770s. 
Oct 8 19:50:30.755632 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:50:31.448407 kubelet[1575]: E1008 19:50:31.448296 1575 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:50:31.455960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:50:31.456212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:50:31.456665 systemd[1]: kubelet.service: Consumed 1.238s CPU time. Oct 8 19:50:39.696624 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:45642.service - OpenSSH per-connection server daemon (10.0.0.1:45642). Oct 8 19:50:39.733291 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 45642 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:39.735495 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:39.739930 systemd-logind[1442]: New session 4 of user core. Oct 8 19:50:39.749911 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:50:39.806490 sshd[1589]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:39.818856 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:45642.service: Deactivated successfully. Oct 8 19:50:39.820852 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:50:39.822447 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:50:39.823800 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:45650.service - OpenSSH per-connection server daemon (10.0.0.1:45650). Oct 8 19:50:39.824586 systemd-logind[1442]: Removed session 4. 
Oct 8 19:50:39.872027 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 45650 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:39.873786 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:39.878607 systemd-logind[1442]: New session 5 of user core. Oct 8 19:50:39.891929 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:50:39.943296 sshd[1596]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:39.962265 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:45650.service: Deactivated successfully. Oct 8 19:50:39.964047 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:50:39.965451 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:50:39.966792 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:45662.service - OpenSSH per-connection server daemon (10.0.0.1:45662). Oct 8 19:50:39.968104 systemd-logind[1442]: Removed session 5. Oct 8 19:50:40.020963 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 45662 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:40.022880 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:40.027516 systemd-logind[1442]: New session 6 of user core. Oct 8 19:50:40.036953 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:50:40.094115 sshd[1603]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:40.106557 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:45662.service: Deactivated successfully. Oct 8 19:50:40.108379 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:50:40.110059 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:50:40.122007 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:45676.service - OpenSSH per-connection server daemon (10.0.0.1:45676). Oct 8 19:50:40.123107 systemd-logind[1442]: Removed session 6. 
Oct 8 19:50:40.151898 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 45676 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:40.153656 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:40.158088 systemd-logind[1442]: New session 7 of user core. Oct 8 19:50:40.168848 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:50:40.230670 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:50:40.231054 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:50:40.253768 sudo[1614]: pam_unix(sudo:session): session closed for user root Oct 8 19:50:40.256327 sshd[1610]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:40.274447 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:45676.service: Deactivated successfully. Oct 8 19:50:40.276133 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:50:40.277659 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:50:40.287939 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:45688.service - OpenSSH per-connection server daemon (10.0.0.1:45688). Oct 8 19:50:40.288813 systemd-logind[1442]: Removed session 7. Oct 8 19:50:40.318491 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 45688 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:40.320220 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:40.324312 systemd-logind[1442]: New session 8 of user core. Oct 8 19:50:40.333834 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 8 19:50:40.388462 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:50:40.388842 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:50:40.392856 sudo[1623]: pam_unix(sudo:session): session closed for user root Oct 8 19:50:40.400257 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:50:40.400763 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:50:40.421069 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:50:40.423118 auditctl[1626]: No rules Oct 8 19:50:40.425091 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:50:40.425454 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:50:40.428130 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:50:40.462135 augenrules[1644]: No rules Oct 8 19:50:40.464332 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:50:40.465855 sudo[1622]: pam_unix(sudo:session): session closed for user root Oct 8 19:50:40.468369 sshd[1619]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:40.484973 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:45688.service: Deactivated successfully. Oct 8 19:50:40.487287 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:50:40.489131 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:50:40.508254 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:45702.service - OpenSSH per-connection server daemon (10.0.0.1:45702). Oct 8 19:50:40.509353 systemd-logind[1442]: Removed session 8. 
Oct 8 19:50:40.540922 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:50:40.542818 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:50:40.547689 systemd-logind[1442]: New session 9 of user core. Oct 8 19:50:40.557848 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:50:40.614434 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:50:40.614919 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:50:40.930252 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:50:40.930255 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:50:41.230344 dockerd[1673]: time="2024-10-08T19:50:41.230148906Z" level=info msg="Starting up" Oct 8 19:50:41.706470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:50:41.717880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:50:41.927121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:50:41.935728 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:50:42.538790 kubelet[1705]: E1008 19:50:42.538441 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:50:42.547267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:50:42.547513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:50:43.103056 dockerd[1673]: time="2024-10-08T19:50:43.102981947Z" level=info msg="Loading containers: start." Oct 8 19:50:43.269726 kernel: Initializing XFRM netlink socket Oct 8 19:50:43.358350 systemd-networkd[1399]: docker0: Link UP Oct 8 19:50:43.380360 dockerd[1673]: time="2024-10-08T19:50:43.380312691Z" level=info msg="Loading containers: done." Oct 8 19:50:43.397427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1228222892-merged.mount: Deactivated successfully. 
Oct 8 19:50:43.399604 dockerd[1673]: time="2024-10-08T19:50:43.399544798Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:50:43.399721 dockerd[1673]: time="2024-10-08T19:50:43.399653522Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:50:43.399832 dockerd[1673]: time="2024-10-08T19:50:43.399790839Z" level=info msg="Daemon has completed initialization" Oct 8 19:50:43.489422 dockerd[1673]: time="2024-10-08T19:50:43.489296194Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:50:43.489820 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:50:44.890396 containerd[1464]: time="2024-10-08T19:50:44.890313461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 19:50:45.574513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150644736.mount: Deactivated successfully. 
Oct 8 19:50:50.811406 containerd[1464]: time="2024-10-08T19:50:50.811313841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:50.839056 containerd[1464]: time="2024-10-08T19:50:50.838932546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 8 19:50:50.887624 containerd[1464]: time="2024-10-08T19:50:50.887539110Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:50.912527 containerd[1464]: time="2024-10-08T19:50:50.912456980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:50.913790 containerd[1464]: time="2024-10-08T19:50:50.913754343Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 6.023375559s" Oct 8 19:50:50.913864 containerd[1464]: time="2024-10-08T19:50:50.913805589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 8 19:50:50.936864 containerd[1464]: time="2024-10-08T19:50:50.936810652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 19:50:52.798151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 8 19:50:52.825072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:50:52.982248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:50:52.988464 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:50:53.498372 containerd[1464]: time="2024-10-08T19:50:53.498279510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:53.499903 containerd[1464]: time="2024-10-08T19:50:53.499838654Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 8 19:50:53.501595 containerd[1464]: time="2024-10-08T19:50:53.501564731Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:53.504512 kubelet[1918]: E1008 19:50:53.504412 1918 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:50:53.505593 containerd[1464]: time="2024-10-08T19:50:53.505542421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:53.506831 containerd[1464]: time="2024-10-08T19:50:53.506778559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.569925077s" Oct 8 19:50:53.506964 containerd[1464]: time="2024-10-08T19:50:53.506831979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 8 19:50:53.509960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:50:53.510227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:50:53.536174 containerd[1464]: time="2024-10-08T19:50:53.536129963Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 19:50:55.703905 containerd[1464]: time="2024-10-08T19:50:55.703811961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:55.704800 containerd[1464]: time="2024-10-08T19:50:55.704766451Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 8 19:50:55.706446 containerd[1464]: time="2024-10-08T19:50:55.706393823Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:55.709703 containerd[1464]: time="2024-10-08T19:50:55.709655880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:55.710597 containerd[1464]: time="2024-10-08T19:50:55.710554886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", 
repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 2.174385379s" Oct 8 19:50:55.710597 containerd[1464]: time="2024-10-08T19:50:55.710588299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 8 19:50:55.765813 containerd[1464]: time="2024-10-08T19:50:55.765764214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 19:50:57.617838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430740846.mount: Deactivated successfully. Oct 8 19:50:58.643735 containerd[1464]: time="2024-10-08T19:50:58.642715013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:58.644552 containerd[1464]: time="2024-10-08T19:50:58.644509438Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 8 19:50:58.645876 containerd[1464]: time="2024-10-08T19:50:58.645811881Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:58.648161 containerd[1464]: time="2024-10-08T19:50:58.648093750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:58.648863 containerd[1464]: time="2024-10-08T19:50:58.648794414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 2.882990837s" Oct 8 19:50:58.648863 containerd[1464]: time="2024-10-08T19:50:58.648857272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 8 19:50:58.671815 containerd[1464]: time="2024-10-08T19:50:58.671761666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:50:59.572014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362163031.mount: Deactivated successfully. Oct 8 19:51:00.464118 containerd[1464]: time="2024-10-08T19:51:00.464043795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:00.464852 containerd[1464]: time="2024-10-08T19:51:00.464778943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 19:51:00.466182 containerd[1464]: time="2024-10-08T19:51:00.466136279Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:00.469488 containerd[1464]: time="2024-10-08T19:51:00.469442800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:00.470670 containerd[1464]: time="2024-10-08T19:51:00.470628083Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.798819088s" Oct 8 19:51:00.470670 containerd[1464]: time="2024-10-08T19:51:00.470664551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 19:51:00.492844 containerd[1464]: time="2024-10-08T19:51:00.492785335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:51:01.009556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618093884.mount: Deactivated successfully. Oct 8 19:51:01.015487 containerd[1464]: time="2024-10-08T19:51:01.015444783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:01.016205 containerd[1464]: time="2024-10-08T19:51:01.016149180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 8 19:51:01.017313 containerd[1464]: time="2024-10-08T19:51:01.017275840Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:01.019683 containerd[1464]: time="2024-10-08T19:51:01.019653308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:01.020421 containerd[1464]: time="2024-10-08T19:51:01.020387373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 527.552183ms" Oct 8 
19:51:01.020463 containerd[1464]: time="2024-10-08T19:51:01.020424785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 19:51:01.040681 containerd[1464]: time="2024-10-08T19:51:01.040640729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 19:51:02.768904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351492382.mount: Deactivated successfully. Oct 8 19:51:03.591142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 19:51:03.607270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:04.046528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:04.048355 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:04.240288 kubelet[2072]: E1008 19:51:04.240222 2072 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:04.246935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:04.247183 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:51:05.879542 containerd[1464]: time="2024-10-08T19:51:05.879478481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:05.880282 containerd[1464]: time="2024-10-08T19:51:05.880206936Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 8 19:51:05.881404 containerd[1464]: time="2024-10-08T19:51:05.881360795Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:05.884401 containerd[1464]: time="2024-10-08T19:51:05.884369104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:05.885793 containerd[1464]: time="2024-10-08T19:51:05.885745580Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.844908031s" Oct 8 19:51:05.885793 containerd[1464]: time="2024-10-08T19:51:05.885786628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 8 19:51:09.070490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:09.088010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:09.105534 systemd[1]: Reloading requested from client PID 2162 ('systemctl') (unit session-9.scope)... Oct 8 19:51:09.105555 systemd[1]: Reloading... 
Oct 8 19:51:09.183736 zram_generator::config[2204]: No configuration found. Oct 8 19:51:09.426680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:51:09.509954 systemd[1]: Reloading finished in 403 ms. Oct 8 19:51:09.571418 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:51:09.571583 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:51:09.571973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:09.574035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:09.846441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:09.858046 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:51:09.927470 kubelet[2250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:51:09.927470 kubelet[2250]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:51:09.927470 kubelet[2250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:51:09.928010 kubelet[2250]: I1008 19:51:09.927515 2250 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:51:10.063968 kubelet[2250]: I1008 19:51:10.063894 2250 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:51:10.063968 kubelet[2250]: I1008 19:51:10.063937 2250 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:51:10.064337 kubelet[2250]: I1008 19:51:10.064312 2250 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:51:10.106265 kubelet[2250]: E1008 19:51:10.106091 2250 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.109955 kubelet[2250]: I1008 19:51:10.109905 2250 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:51:10.136843 kubelet[2250]: I1008 19:51:10.136788 2250 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:51:10.139971 kubelet[2250]: I1008 19:51:10.139930 2250 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:51:10.140276 kubelet[2250]: I1008 19:51:10.140169 2250 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:51:10.140276 kubelet[2250]: I1008 19:51:10.140216 2250 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:51:10.140276 kubelet[2250]: I1008 19:51:10.140228 2250 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:51:10.140493 kubelet[2250]: I1008 
19:51:10.140400 2250 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:51:10.140553 kubelet[2250]: I1008 19:51:10.140534 2250 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:51:10.140602 kubelet[2250]: I1008 19:51:10.140558 2250 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:51:10.140602 kubelet[2250]: I1008 19:51:10.140596 2250 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:51:10.140666 kubelet[2250]: I1008 19:51:10.140611 2250 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:51:10.141889 kubelet[2250]: W1008 19:51:10.141678 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.141889 kubelet[2250]: E1008 19:51:10.141769 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.142824 kubelet[2250]: I1008 19:51:10.142445 2250 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:51:10.142824 kubelet[2250]: W1008 19:51:10.142429 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.142824 kubelet[2250]: E1008 19:51:10.142507 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.145725 kubelet[2250]: I1008 19:51:10.145681 2250 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:51:10.146916 kubelet[2250]: W1008 19:51:10.146876 2250 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:51:10.147798 kubelet[2250]: I1008 19:51:10.147556 2250 server.go:1256] "Started kubelet" Oct 8 19:51:10.148521 kubelet[2250]: I1008 19:51:10.148027 2250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:51:10.148521 kubelet[2250]: I1008 19:51:10.148277 2250 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:51:10.148521 kubelet[2250]: I1008 19:51:10.148333 2250 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:51:10.148920 kubelet[2250]: I1008 19:51:10.148892 2250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:51:10.149421 kubelet[2250]: I1008 19:51:10.149393 2250 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:51:10.155270 kubelet[2250]: E1008 19:51:10.155172 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:10.155403 kubelet[2250]: I1008 19:51:10.155315 2250 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:51:10.156722 kubelet[2250]: I1008 19:51:10.155756 2250 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:51:10.156722 kubelet[2250]: I1008 19:51:10.156056 2250 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:51:10.156722 kubelet[2250]: W1008 19:51:10.156636 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.156829 kubelet[2250]: E1008 19:51:10.156689 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.158569 kubelet[2250]: I1008 19:51:10.158523 2250 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:51:10.161725 kubelet[2250]: E1008 19:51:10.160743 2250 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:51:10.162767 kubelet[2250]: E1008 19:51:10.162742 2250 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc922af1dd7c18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:51:10.147533848 +0000 UTC m=+0.284304219,LastTimestamp:2024-10-08 19:51:10.147533848 +0000 UTC m=+0.284304219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:51:10.163959 kubelet[2250]: I1008 19:51:10.163930 2250 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:51:10.163959 
kubelet[2250]: I1008 19:51:10.163948 2250 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:51:10.186799 kubelet[2250]: E1008 19:51:10.186750 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Oct 8 19:51:10.192274 kubelet[2250]: I1008 19:51:10.192028 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:51:10.194017 kubelet[2250]: I1008 19:51:10.193966 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:51:10.194017 kubelet[2250]: I1008 19:51:10.194005 2250 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:51:10.194172 kubelet[2250]: I1008 19:51:10.194033 2250 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:51:10.194172 kubelet[2250]: E1008 19:51:10.194092 2250 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:51:10.194928 kubelet[2250]: W1008 19:51:10.194617 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.194928 kubelet[2250]: E1008 19:51:10.194667 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:10.197689 kubelet[2250]: I1008 19:51:10.197672 2250 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:51:10.197689 
kubelet[2250]: I1008 19:51:10.197686 2250 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:51:10.197806 kubelet[2250]: I1008 19:51:10.197766 2250 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:51:10.257008 kubelet[2250]: I1008 19:51:10.256963 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:51:10.257462 kubelet[2250]: E1008 19:51:10.257441 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Oct 8 19:51:10.294883 kubelet[2250]: E1008 19:51:10.294788 2250 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:51:10.387802 kubelet[2250]: E1008 19:51:10.387639 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Oct 8 19:51:10.459179 kubelet[2250]: I1008 19:51:10.459142 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:51:10.459520 kubelet[2250]: E1008 19:51:10.459504 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Oct 8 19:51:10.495734 kubelet[2250]: E1008 19:51:10.495667 2250 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:51:10.788816 kubelet[2250]: E1008 19:51:10.788551 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Oct 
8 19:51:10.861129 kubelet[2250]: I1008 19:51:10.861073 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:51:10.861485 kubelet[2250]: E1008 19:51:10.861457 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Oct 8 19:51:10.896801 kubelet[2250]: E1008 19:51:10.896668 2250 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:51:11.024185 kubelet[2250]: W1008 19:51:11.024114 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.024185 kubelet[2250]: E1008 19:51:11.024170 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.090284 kubelet[2250]: W1008 19:51:11.090120 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.090284 kubelet[2250]: E1008 19:51:11.090180 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.224899 kubelet[2250]: W1008 19:51:11.224802 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: 
failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.224899 kubelet[2250]: E1008 19:51:11.224885 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.302032 kubelet[2250]: W1008 19:51:11.301934 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.302032 kubelet[2250]: E1008 19:51:11.302028 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:11.589631 kubelet[2250]: E1008 19:51:11.589547 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Oct 8 19:51:11.594871 kubelet[2250]: I1008 19:51:11.594784 2250 policy_none.go:49] "None policy: Start" Oct 8 19:51:11.596126 kubelet[2250]: I1008 19:51:11.596079 2250 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:51:11.596317 kubelet[2250]: I1008 19:51:11.596149 2250 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:51:11.607624 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 8 19:51:11.623177 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:51:11.626930 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 19:51:11.636529 kubelet[2250]: I1008 19:51:11.636471 2250 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:51:11.637003 kubelet[2250]: I1008 19:51:11.636984 2250 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:51:11.638491 kubelet[2250]: E1008 19:51:11.638462 2250 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:51:11.663739 kubelet[2250]: I1008 19:51:11.663651 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:51:11.664096 kubelet[2250]: E1008 19:51:11.664071 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Oct 8 19:51:11.697460 kubelet[2250]: I1008 19:51:11.697358 2250 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:51:11.699143 kubelet[2250]: I1008 19:51:11.699091 2250 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:51:11.699965 kubelet[2250]: I1008 19:51:11.699936 2250 topology_manager.go:215] "Topology Admit Handler" podUID="089b3539f42735821a4ea51ceb975cde" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:51:11.707962 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container 
kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 8 19:51:11.742906 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 8 19:51:11.747961 systemd[1]: Created slice kubepods-burstable-pod089b3539f42735821a4ea51ceb975cde.slice - libcontainer container kubepods-burstable-pod089b3539f42735821a4ea51ceb975cde.slice. Oct 8 19:51:11.764429 kubelet[2250]: I1008 19:51:11.764375 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:11.764429 kubelet[2250]: I1008 19:51:11.764430 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:11.764638 kubelet[2250]: I1008 19:51:11.764468 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:11.764638 kubelet[2250]: I1008 19:51:11.764497 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:11.764638 kubelet[2250]: I1008 19:51:11.764522 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:11.764638 kubelet[2250]: I1008 19:51:11.764565 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:11.764638 kubelet[2250]: I1008 19:51:11.764602 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:11.764813 kubelet[2250]: I1008 19:51:11.764632 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:11.764813 kubelet[2250]: I1008 19:51:11.764659 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:51:12.040383 kubelet[2250]: E1008 19:51:12.040297 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.041286 containerd[1464]: time="2024-10-08T19:51:12.041223608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:51:12.046626 kubelet[2250]: E1008 19:51:12.046561 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.047309 containerd[1464]: time="2024-10-08T19:51:12.047251392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:51:12.050632 kubelet[2250]: E1008 19:51:12.050565 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.051218 containerd[1464]: time="2024-10-08T19:51:12.051145963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:089b3539f42735821a4ea51ceb975cde,Namespace:kube-system,Attempt:0,}" Oct 8 19:51:12.238735 kubelet[2250]: E1008 19:51:12.238663 2250 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:12.620484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984783478.mount: 
Deactivated successfully. Oct 8 19:51:12.628965 containerd[1464]: time="2024-10-08T19:51:12.628903315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:51:12.629960 containerd[1464]: time="2024-10-08T19:51:12.629904978Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:51:12.630873 containerd[1464]: time="2024-10-08T19:51:12.630778078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 19:51:12.631847 containerd[1464]: time="2024-10-08T19:51:12.631814187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:51:12.632840 containerd[1464]: time="2024-10-08T19:51:12.632757529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:51:12.633808 containerd[1464]: time="2024-10-08T19:51:12.633778670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:51:12.634771 containerd[1464]: time="2024-10-08T19:51:12.634722883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:51:12.637398 containerd[1464]: time="2024-10-08T19:51:12.637367228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 
19:51:12.639288 containerd[1464]: time="2024-10-08T19:51:12.639256298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.008782ms" Oct 8 19:51:12.639888 containerd[1464]: time="2024-10-08T19:51:12.639850758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.505448ms" Oct 8 19:51:12.640514 containerd[1464]: time="2024-10-08T19:51:12.640448914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.118444ms" Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777900605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777972432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777984474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.778068384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777781529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777832946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777842945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.778219 containerd[1464]: time="2024-10-08T19:51:12.777912778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.779312 containerd[1464]: time="2024-10-08T19:51:12.779049077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:51:12.779312 containerd[1464]: time="2024-10-08T19:51:12.779112167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:51:12.779312 containerd[1464]: time="2024-10-08T19:51:12.779130682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.779312 containerd[1464]: time="2024-10-08T19:51:12.779223509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:51:12.802901 systemd[1]: Started cri-containerd-ad7ba3a564bab2a40f9111ad83801756bdfeeed622d4a0543977716738794aba.scope - libcontainer container ad7ba3a564bab2a40f9111ad83801756bdfeeed622d4a0543977716738794aba. 
Oct 8 19:51:12.807758 systemd[1]: Started cri-containerd-18c24d261a1c5a549890283b0910a57f90c2b20cdd7956e9db590ac8af0f05ff.scope - libcontainer container 18c24d261a1c5a549890283b0910a57f90c2b20cdd7956e9db590ac8af0f05ff. Oct 8 19:51:12.810087 systemd[1]: Started cri-containerd-c47523cf85be5fe0b49b0f2319002c0d13fd88d93d86bdb94280736cff389745.scope - libcontainer container c47523cf85be5fe0b49b0f2319002c0d13fd88d93d86bdb94280736cff389745. Oct 8 19:51:12.850823 containerd[1464]: time="2024-10-08T19:51:12.850568751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad7ba3a564bab2a40f9111ad83801756bdfeeed622d4a0543977716738794aba\"" Oct 8 19:51:12.854101 kubelet[2250]: E1008 19:51:12.854066 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.856975 containerd[1464]: time="2024-10-08T19:51:12.856931072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:089b3539f42735821a4ea51ceb975cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c24d261a1c5a549890283b0910a57f90c2b20cdd7956e9db590ac8af0f05ff\"" Oct 8 19:51:12.858652 kubelet[2250]: E1008 19:51:12.858617 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.858991 containerd[1464]: time="2024-10-08T19:51:12.858962671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47523cf85be5fe0b49b0f2319002c0d13fd88d93d86bdb94280736cff389745\"" Oct 8 19:51:12.860219 containerd[1464]: time="2024-10-08T19:51:12.859853644Z" level=info msg="CreateContainer 
within sandbox \"ad7ba3a564bab2a40f9111ad83801756bdfeeed622d4a0543977716738794aba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:51:12.861032 kubelet[2250]: E1008 19:51:12.861004 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:12.864597 containerd[1464]: time="2024-10-08T19:51:12.864556932Z" level=info msg="CreateContainer within sandbox \"18c24d261a1c5a549890283b0910a57f90c2b20cdd7956e9db590ac8af0f05ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:51:12.865108 containerd[1464]: time="2024-10-08T19:51:12.864989484Z" level=info msg="CreateContainer within sandbox \"c47523cf85be5fe0b49b0f2319002c0d13fd88d93d86bdb94280736cff389745\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:51:12.891951 containerd[1464]: time="2024-10-08T19:51:12.891779451Z" level=info msg="CreateContainer within sandbox \"ad7ba3a564bab2a40f9111ad83801756bdfeeed622d4a0543977716738794aba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82fc18d238bae0ec0ac75a9a030d49ae4dc17c0bd16699e6945e2e6b31d2c4f8\"" Oct 8 19:51:12.892686 containerd[1464]: time="2024-10-08T19:51:12.892647361Z" level=info msg="StartContainer for \"82fc18d238bae0ec0ac75a9a030d49ae4dc17c0bd16699e6945e2e6b31d2c4f8\"" Oct 8 19:51:12.901815 containerd[1464]: time="2024-10-08T19:51:12.901764034Z" level=info msg="CreateContainer within sandbox \"c47523cf85be5fe0b49b0f2319002c0d13fd88d93d86bdb94280736cff389745\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea8595355cf1a424ac1c678996486d7c681bbd840b7a396e9240921cb3196f58\"" Oct 8 19:51:12.902813 containerd[1464]: time="2024-10-08T19:51:12.902725311Z" level=info msg="CreateContainer within sandbox \"18c24d261a1c5a549890283b0910a57f90c2b20cdd7956e9db590ac8af0f05ff\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6151a8fc160951a6ee1bcca2a6f56c0c50241b87a41667e8913650a2e9273252\"" Oct 8 19:51:12.902922 containerd[1464]: time="2024-10-08T19:51:12.902889042Z" level=info msg="StartContainer for \"ea8595355cf1a424ac1c678996486d7c681bbd840b7a396e9240921cb3196f58\"" Oct 8 19:51:12.903280 containerd[1464]: time="2024-10-08T19:51:12.903245309Z" level=info msg="StartContainer for \"6151a8fc160951a6ee1bcca2a6f56c0c50241b87a41667e8913650a2e9273252\"" Oct 8 19:51:12.924025 systemd[1]: Started cri-containerd-82fc18d238bae0ec0ac75a9a030d49ae4dc17c0bd16699e6945e2e6b31d2c4f8.scope - libcontainer container 82fc18d238bae0ec0ac75a9a030d49ae4dc17c0bd16699e6945e2e6b31d2c4f8. Oct 8 19:51:12.955129 systemd[1]: Started cri-containerd-6151a8fc160951a6ee1bcca2a6f56c0c50241b87a41667e8913650a2e9273252.scope - libcontainer container 6151a8fc160951a6ee1bcca2a6f56c0c50241b87a41667e8913650a2e9273252. Oct 8 19:51:12.959315 systemd[1]: Started cri-containerd-ea8595355cf1a424ac1c678996486d7c681bbd840b7a396e9240921cb3196f58.scope - libcontainer container ea8595355cf1a424ac1c678996486d7c681bbd840b7a396e9240921cb3196f58. 
Oct 8 19:51:13.001214 containerd[1464]: time="2024-10-08T19:51:13.001147024Z" level=info msg="StartContainer for \"82fc18d238bae0ec0ac75a9a030d49ae4dc17c0bd16699e6945e2e6b31d2c4f8\" returns successfully" Oct 8 19:51:13.022484 containerd[1464]: time="2024-10-08T19:51:13.022416358Z" level=info msg="StartContainer for \"ea8595355cf1a424ac1c678996486d7c681bbd840b7a396e9240921cb3196f58\" returns successfully" Oct 8 19:51:13.022484 containerd[1464]: time="2024-10-08T19:51:13.022453228Z" level=info msg="StartContainer for \"6151a8fc160951a6ee1bcca2a6f56c0c50241b87a41667e8913650a2e9273252\" returns successfully" Oct 8 19:51:13.032340 kubelet[2250]: W1008 19:51:13.032263 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:13.032340 kubelet[2250]: E1008 19:51:13.032342 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:13.058803 kubelet[2250]: W1008 19:51:13.058685 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:13.058803 kubelet[2250]: E1008 19:51:13.058783 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Oct 8 19:51:13.213846 kubelet[2250]: E1008 19:51:13.211949 2250 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:13.218813 kubelet[2250]: E1008 19:51:13.218776 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:13.222305 kubelet[2250]: E1008 19:51:13.222272 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:13.266604 kubelet[2250]: I1008 19:51:13.266561 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:51:13.966276 update_engine[1444]: I20241008 19:51:13.966123 1444 update_attempter.cc:509] Updating boot flags... Oct 8 19:51:14.046767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2535) Oct 8 19:51:14.137786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2540) Oct 8 19:51:14.206830 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2540) Oct 8 19:51:14.228337 kubelet[2250]: E1008 19:51:14.228215 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:14.614663 kubelet[2250]: E1008 19:51:14.614622 2250 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:51:14.733998 kubelet[2250]: I1008 19:51:14.733915 2250 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:51:14.846159 kubelet[2250]: E1008 19:51:14.844023 2250 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"localhost\" not found" Oct 8 19:51:14.944489 kubelet[2250]: E1008 19:51:14.944312 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.045101 kubelet[2250]: E1008 19:51:15.045043 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.145394 kubelet[2250]: E1008 19:51:15.145277 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.246608 kubelet[2250]: E1008 19:51:15.246431 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.347313 kubelet[2250]: E1008 19:51:15.347227 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.447975 kubelet[2250]: E1008 19:51:15.447905 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.548801 kubelet[2250]: E1008 19:51:15.548584 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.649621 kubelet[2250]: E1008 19:51:15.649561 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.749860 kubelet[2250]: E1008 19:51:15.749774 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.850854 kubelet[2250]: E1008 19:51:15.850804 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:15.951575 kubelet[2250]: E1008 19:51:15.951516 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.010141 kubelet[2250]: E1008 19:51:16.010089 2250 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:16.052204 kubelet[2250]: E1008 19:51:16.052129 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.153437 kubelet[2250]: E1008 19:51:16.153216 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.253432 kubelet[2250]: E1008 19:51:16.253371 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.353853 kubelet[2250]: E1008 19:51:16.353771 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.454440 kubelet[2250]: E1008 19:51:16.454274 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.555103 kubelet[2250]: E1008 19:51:16.554996 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.656221 kubelet[2250]: E1008 19:51:16.656128 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.756962 kubelet[2250]: E1008 19:51:16.756773 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.857593 kubelet[2250]: E1008 19:51:16.857498 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:16.958200 kubelet[2250]: E1008 19:51:16.958118 2250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:17.059398 kubelet[2250]: E1008 19:51:17.059222 2250 kubelet_node_status.go:462] "Error getting 
the current node from lister" err="node \"localhost\" not found" Oct 8 19:51:17.146408 kubelet[2250]: I1008 19:51:17.146354 2250 apiserver.go:52] "Watching apiserver" Oct 8 19:51:17.156788 kubelet[2250]: I1008 19:51:17.156753 2250 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:51:17.626716 systemd[1]: Reloading requested from client PID 2544 ('systemctl') (unit session-9.scope)... Oct 8 19:51:17.626735 systemd[1]: Reloading... Oct 8 19:51:17.712741 zram_generator::config[2583]: No configuration found. Oct 8 19:51:17.828315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:51:17.929857 systemd[1]: Reloading finished in 302 ms. Oct 8 19:51:17.976079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:17.992518 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:51:17.992888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:18.003033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:18.173248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:18.180501 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:51:18.236889 kubelet[2628]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:51:18.236889 kubelet[2628]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 8 19:51:18.236889 kubelet[2628]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:51:18.237480 kubelet[2628]: I1008 19:51:18.236940 2628 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:51:18.242881 kubelet[2628]: I1008 19:51:18.242829 2628 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:51:18.242881 kubelet[2628]: I1008 19:51:18.242861 2628 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:51:18.243156 kubelet[2628]: I1008 19:51:18.243131 2628 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:51:18.245189 kubelet[2628]: I1008 19:51:18.245157 2628 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:51:18.249943 kubelet[2628]: I1008 19:51:18.249255 2628 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:51:18.259127 kubelet[2628]: I1008 19:51:18.259060 2628 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:51:18.259423 kubelet[2628]: I1008 19:51:18.259396 2628 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:51:18.259610 kubelet[2628]: I1008 19:51:18.259591 2628 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:51:18.259755 kubelet[2628]: I1008 19:51:18.259624 2628 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:51:18.259755 kubelet[2628]: I1008 19:51:18.259635 2628 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:51:18.259755 kubelet[2628]: I1008 
19:51:18.259685 2628 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:51:18.259847 kubelet[2628]: I1008 19:51:18.259823 2628 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:51:18.259847 kubelet[2628]: I1008 19:51:18.259842 2628 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:51:18.259982 kubelet[2628]: I1008 19:51:18.259869 2628 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:51:18.259982 kubelet[2628]: I1008 19:51:18.259886 2628 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:51:18.261447 kubelet[2628]: I1008 19:51:18.261406 2628 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:51:18.261646 kubelet[2628]: I1008 19:51:18.261624 2628 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:51:18.262312 kubelet[2628]: I1008 19:51:18.262288 2628 server.go:1256] "Started kubelet" Oct 8 19:51:18.262672 kubelet[2628]: I1008 19:51:18.262589 2628 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:51:18.267073 kubelet[2628]: I1008 19:51:18.264893 2628 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:51:18.267073 kubelet[2628]: I1008 19:51:18.265220 2628 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:51:18.267073 kubelet[2628]: I1008 19:51:18.266537 2628 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:51:18.272750 kubelet[2628]: I1008 19:51:18.271579 2628 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:51:18.278582 kubelet[2628]: I1008 19:51:18.278530 2628 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:51:18.279083 kubelet[2628]: I1008 19:51:18.278988 2628 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Oct 8 19:51:18.279801 kubelet[2628]: I1008 19:51:18.279524 2628 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:51:18.281929 kubelet[2628]: I1008 19:51:18.281761 2628 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:51:18.281929 kubelet[2628]: I1008 19:51:18.281869 2628 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:51:18.284553 kubelet[2628]: I1008 19:51:18.284537 2628 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:51:18.286448 kubelet[2628]: E1008 19:51:18.285806 2628 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:51:18.291575 kubelet[2628]: I1008 19:51:18.291529 2628 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:51:18.293013 kubelet[2628]: I1008 19:51:18.292991 2628 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:51:18.293093 kubelet[2628]: I1008 19:51:18.293028 2628 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:51:18.293093 kubelet[2628]: I1008 19:51:18.293051 2628 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:51:18.293153 kubelet[2628]: E1008 19:51:18.293116 2628 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:51:18.326264 kubelet[2628]: I1008 19:51:18.326219 2628 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:51:18.326264 kubelet[2628]: I1008 19:51:18.326252 2628 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:51:18.326264 kubelet[2628]: I1008 19:51:18.326273 2628 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:51:18.326503 kubelet[2628]: I1008 19:51:18.326466 2628 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:51:18.326503 kubelet[2628]: I1008 19:51:18.326492 2628 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:51:18.326503 kubelet[2628]: I1008 19:51:18.326501 2628 policy_none.go:49] "None policy: Start" Oct 8 19:51:18.327274 kubelet[2628]: I1008 19:51:18.327243 2628 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:51:18.327326 kubelet[2628]: I1008 19:51:18.327297 2628 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:51:18.327597 kubelet[2628]: I1008 19:51:18.327568 2628 state_mem.go:75] "Updated machine memory state" Oct 8 19:51:18.332894 kubelet[2628]: I1008 19:51:18.332866 2628 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:51:18.333281 kubelet[2628]: I1008 19:51:18.333145 2628 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:51:18.385077 kubelet[2628]: I1008 19:51:18.384947 2628 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Oct 8 19:51:18.394195 kubelet[2628]: I1008 19:51:18.394104 2628 topology_manager.go:215] "Topology Admit Handler" podUID="089b3539f42735821a4ea51ceb975cde" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:51:18.394400 kubelet[2628]: I1008 19:51:18.394231 2628 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:51:18.394400 kubelet[2628]: I1008 19:51:18.394276 2628 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:51:18.397251 kubelet[2628]: I1008 19:51:18.395993 2628 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:51:18.397251 kubelet[2628]: I1008 19:51:18.396090 2628 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:51:18.583473 kubelet[2628]: I1008 19:51:18.583275 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:18.583473 kubelet[2628]: I1008 19:51:18.583343 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:18.583473 kubelet[2628]: I1008 19:51:18.583370 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:18.583473 kubelet[2628]: I1008 19:51:18.583396 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:18.583473 kubelet[2628]: I1008 19:51:18.583440 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:18.583850 kubelet[2628]: I1008 19:51:18.583460 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:18.583850 kubelet[2628]: I1008 19:51:18.583487 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:51:18.583850 kubelet[2628]: I1008 19:51:18.583510 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/089b3539f42735821a4ea51ceb975cde-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"089b3539f42735821a4ea51ceb975cde\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:18.583850 kubelet[2628]: I1008 19:51:18.583534 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:51:18.726071 kubelet[2628]: E1008 19:51:18.725985 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:18.726071 kubelet[2628]: E1008 19:51:18.726035 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:18.726490 kubelet[2628]: E1008 19:51:18.726417 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:19.262068 kubelet[2628]: I1008 19:51:19.261986 2628 apiserver.go:52] "Watching apiserver" Oct 8 19:51:19.279983 kubelet[2628]: I1008 19:51:19.279932 2628 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:51:19.311863 kubelet[2628]: E1008 19:51:19.311816 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:19.314982 kubelet[2628]: E1008 19:51:19.314951 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:19.323238 kubelet[2628]: E1008 19:51:19.323193 2628 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:51:19.323727 kubelet[2628]: E1008 19:51:19.323686 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:19.359796 kubelet[2628]: I1008 19:51:19.359468 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.359417086 podStartE2EDuration="1.359417086s" podCreationTimestamp="2024-10-08 19:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:51:19.346438111 +0000 UTC m=+1.160256619" watchObservedRunningTime="2024-10-08 19:51:19.359417086 +0000 UTC m=+1.173235594" Oct 8 19:51:19.368249 kubelet[2628]: I1008 19:51:19.368151 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.368099487 podStartE2EDuration="1.368099487s" podCreationTimestamp="2024-10-08 19:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:51:19.360160332 +0000 UTC m=+1.173978840" watchObservedRunningTime="2024-10-08 19:51:19.368099487 +0000 UTC m=+1.181917995" Oct 8 19:51:19.368249 kubelet[2628]: I1008 19:51:19.368226 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.368210297 podStartE2EDuration="1.368210297s" podCreationTimestamp="2024-10-08 19:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2024-10-08 19:51:19.368200768 +0000 UTC m=+1.182019276" watchObservedRunningTime="2024-10-08 19:51:19.368210297 +0000 UTC m=+1.182028805" Oct 8 19:51:20.314727 kubelet[2628]: E1008 19:51:20.312566 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:20.374346 kubelet[2628]: E1008 19:51:20.374199 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:22.630009 sudo[1655]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:22.633101 sshd[1652]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:22.638754 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:45702.service: Deactivated successfully. Oct 8 19:51:22.641181 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:51:22.641384 systemd[1]: session-9.scope: Consumed 5.738s CPU time, 191.5M memory peak, 0B memory swap peak. Oct 8 19:51:22.641907 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:51:22.643363 systemd-logind[1442]: Removed session 9. 
Oct 8 19:51:25.727949 kubelet[2628]: E1008 19:51:25.727887 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:26.323290 kubelet[2628]: E1008 19:51:26.323238 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:27.616035 kubelet[2628]: E1008 19:51:27.615977 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:28.326557 kubelet[2628]: E1008 19:51:28.326490 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:30.379382 kubelet[2628]: E1008 19:51:30.379331 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:31.919271 kubelet[2628]: I1008 19:51:31.919216 2628 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:51:31.919894 containerd[1464]: time="2024-10-08T19:51:31.919723263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:51:31.920201 kubelet[2628]: I1008 19:51:31.919996 2628 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:51:33.203511 kubelet[2628]: I1008 19:51:33.203443 2628 topology_manager.go:215] "Topology Admit Handler" podUID="8f146a88-f561-4bef-938f-6ea73d148545" podNamespace="kube-system" podName="kube-proxy-lk28n"
Oct 8 19:51:33.212845 systemd[1]: Created slice kubepods-besteffort-pod8f146a88_f561_4bef_938f_6ea73d148545.slice - libcontainer container kubepods-besteffort-pod8f146a88_f561_4bef_938f_6ea73d148545.slice.
Oct 8 19:51:33.279875 kubelet[2628]: I1008 19:51:33.279807 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f146a88-f561-4bef-938f-6ea73d148545-kube-proxy\") pod \"kube-proxy-lk28n\" (UID: \"8f146a88-f561-4bef-938f-6ea73d148545\") " pod="kube-system/kube-proxy-lk28n"
Oct 8 19:51:33.279875 kubelet[2628]: I1008 19:51:33.279862 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f146a88-f561-4bef-938f-6ea73d148545-lib-modules\") pod \"kube-proxy-lk28n\" (UID: \"8f146a88-f561-4bef-938f-6ea73d148545\") " pod="kube-system/kube-proxy-lk28n"
Oct 8 19:51:33.279875 kubelet[2628]: I1008 19:51:33.279893 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gkg\" (UniqueName: \"kubernetes.io/projected/8f146a88-f561-4bef-938f-6ea73d148545-kube-api-access-66gkg\") pod \"kube-proxy-lk28n\" (UID: \"8f146a88-f561-4bef-938f-6ea73d148545\") " pod="kube-system/kube-proxy-lk28n"
Oct 8 19:51:33.280110 kubelet[2628]: I1008 19:51:33.279916 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f146a88-f561-4bef-938f-6ea73d148545-xtables-lock\") pod \"kube-proxy-lk28n\" (UID: \"8f146a88-f561-4bef-938f-6ea73d148545\") " pod="kube-system/kube-proxy-lk28n"
Oct 8 19:51:33.354106 kubelet[2628]: I1008 19:51:33.352503 2628 topology_manager.go:215] "Topology Admit Handler" podUID="0a91bd35-88b4-4b4f-a4d1-0abedec2397c" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-mr76d"
Oct 8 19:51:33.354630 kubelet[2628]: W1008 19:51:33.354608 2628 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Oct 8 19:51:33.355270 kubelet[2628]: E1008 19:51:33.355249 2628 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Oct 8 19:51:33.361514 systemd[1]: Created slice kubepods-besteffort-pod0a91bd35_88b4_4b4f_a4d1_0abedec2397c.slice - libcontainer container kubepods-besteffort-pod0a91bd35_88b4_4b4f_a4d1_0abedec2397c.slice.
Oct 8 19:51:33.380666 kubelet[2628]: I1008 19:51:33.380573 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a91bd35-88b4-4b4f-a4d1-0abedec2397c-var-lib-calico\") pod \"tigera-operator-5d56685c77-mr76d\" (UID: \"0a91bd35-88b4-4b4f-a4d1-0abedec2397c\") " pod="tigera-operator/tigera-operator-5d56685c77-mr76d"
Oct 8 19:51:33.380666 kubelet[2628]: I1008 19:51:33.380670 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5jf\" (UniqueName: \"kubernetes.io/projected/0a91bd35-88b4-4b4f-a4d1-0abedec2397c-kube-api-access-6z5jf\") pod \"tigera-operator-5d56685c77-mr76d\" (UID: \"0a91bd35-88b4-4b4f-a4d1-0abedec2397c\") " pod="tigera-operator/tigera-operator-5d56685c77-mr76d"
Oct 8 19:51:33.665546 containerd[1464]: time="2024-10-08T19:51:33.665465684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-mr76d,Uid:0a91bd35-88b4-4b4f-a4d1-0abedec2397c,Namespace:tigera-operator,Attempt:0,}"
Oct 8 19:51:33.829155 kubelet[2628]: E1008 19:51:33.829115 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:33.829861 containerd[1464]: time="2024-10-08T19:51:33.829825491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk28n,Uid:8f146a88-f561-4bef-938f-6ea73d148545,Namespace:kube-system,Attempt:0,}"
Oct 8 19:51:34.565115 containerd[1464]: time="2024-10-08T19:51:34.565007559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:51:34.565115 containerd[1464]: time="2024-10-08T19:51:34.565085807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:51:34.565115 containerd[1464]: time="2024-10-08T19:51:34.565101516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:34.565417 containerd[1464]: time="2024-10-08T19:51:34.565221873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:34.592945 systemd[1]: Started cri-containerd-2d3d896472e9bf239a47a3e36efd69a6f3a5deec23215fda17d31440507f0fb9.scope - libcontainer container 2d3d896472e9bf239a47a3e36efd69a6f3a5deec23215fda17d31440507f0fb9.
Oct 8 19:51:34.596572 containerd[1464]: time="2024-10-08T19:51:34.595507000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:51:34.596572 containerd[1464]: time="2024-10-08T19:51:34.595577854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:51:34.596572 containerd[1464]: time="2024-10-08T19:51:34.595594736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:34.597063 containerd[1464]: time="2024-10-08T19:51:34.596983588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:34.626018 systemd[1]: Started cri-containerd-fd9fa28a4d1cf5b6feb50242d65baa99489203cddf743c350194b69e0c6303ce.scope - libcontainer container fd9fa28a4d1cf5b6feb50242d65baa99489203cddf743c350194b69e0c6303ce.
Oct 8 19:51:34.641911 containerd[1464]: time="2024-10-08T19:51:34.641843937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-mr76d,Uid:0a91bd35-88b4-4b4f-a4d1-0abedec2397c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2d3d896472e9bf239a47a3e36efd69a6f3a5deec23215fda17d31440507f0fb9\""
Oct 8 19:51:34.644091 containerd[1464]: time="2024-10-08T19:51:34.644039488Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:51:34.656991 containerd[1464]: time="2024-10-08T19:51:34.656837807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk28n,Uid:8f146a88-f561-4bef-938f-6ea73d148545,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd9fa28a4d1cf5b6feb50242d65baa99489203cddf743c350194b69e0c6303ce\""
Oct 8 19:51:34.657813 kubelet[2628]: E1008 19:51:34.657790 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:34.659972 containerd[1464]: time="2024-10-08T19:51:34.659934372Z" level=info msg="CreateContainer within sandbox \"fd9fa28a4d1cf5b6feb50242d65baa99489203cddf743c350194b69e0c6303ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:51:34.858419 containerd[1464]: time="2024-10-08T19:51:34.858305632Z" level=info msg="CreateContainer within sandbox \"fd9fa28a4d1cf5b6feb50242d65baa99489203cddf743c350194b69e0c6303ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd1a3ea1e9cf2fb8839aef60fc068f387be7b5d009607a69e6bf17fcf7c7405b\""
Oct 8 19:51:34.859632 containerd[1464]: time="2024-10-08T19:51:34.859279745Z" level=info msg="StartContainer for \"bd1a3ea1e9cf2fb8839aef60fc068f387be7b5d009607a69e6bf17fcf7c7405b\""
Oct 8 19:51:34.903026 systemd[1]: Started cri-containerd-bd1a3ea1e9cf2fb8839aef60fc068f387be7b5d009607a69e6bf17fcf7c7405b.scope - libcontainer container bd1a3ea1e9cf2fb8839aef60fc068f387be7b5d009607a69e6bf17fcf7c7405b.
Oct 8 19:51:34.942568 containerd[1464]: time="2024-10-08T19:51:34.942494743Z" level=info msg="StartContainer for \"bd1a3ea1e9cf2fb8839aef60fc068f387be7b5d009607a69e6bf17fcf7c7405b\" returns successfully"
Oct 8 19:51:35.340359 kubelet[2628]: E1008 19:51:35.340201 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:35.380922 kubelet[2628]: I1008 19:51:35.380866 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lk28n" podStartSLOduration=3.380814232 podStartE2EDuration="3.380814232s" podCreationTimestamp="2024-10-08 19:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:51:35.380594158 +0000 UTC m=+17.194412666" watchObservedRunningTime="2024-10-08 19:51:35.380814232 +0000 UTC m=+17.194632740"
Oct 8 19:51:37.298636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785347520.mount: Deactivated successfully.
Oct 8 19:51:38.806482 containerd[1464]: time="2024-10-08T19:51:38.806402996Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:38.807936 containerd[1464]: time="2024-10-08T19:51:38.807891334Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136573"
Oct 8 19:51:38.809786 containerd[1464]: time="2024-10-08T19:51:38.809743617Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:38.812938 containerd[1464]: time="2024-10-08T19:51:38.812901162Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:38.814112 containerd[1464]: time="2024-10-08T19:51:38.814043400Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 4.169968405s"
Oct 8 19:51:38.814112 containerd[1464]: time="2024-10-08T19:51:38.814085399Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 8 19:51:38.838567 containerd[1464]: time="2024-10-08T19:51:38.838503175Z" level=info msg="CreateContainer within sandbox \"2d3d896472e9bf239a47a3e36efd69a6f3a5deec23215fda17d31440507f0fb9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:51:39.362053 containerd[1464]: time="2024-10-08T19:51:39.361961302Z" level=info msg="CreateContainer within sandbox \"2d3d896472e9bf239a47a3e36efd69a6f3a5deec23215fda17d31440507f0fb9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec069ba6ac7e4b60b9ba6018f793b838dd4a8081569ee47c61e1363be261fa75\""
Oct 8 19:51:39.363516 containerd[1464]: time="2024-10-08T19:51:39.362799608Z" level=info msg="StartContainer for \"ec069ba6ac7e4b60b9ba6018f793b838dd4a8081569ee47c61e1363be261fa75\""
Oct 8 19:51:39.398192 systemd[1]: Started cri-containerd-ec069ba6ac7e4b60b9ba6018f793b838dd4a8081569ee47c61e1363be261fa75.scope - libcontainer container ec069ba6ac7e4b60b9ba6018f793b838dd4a8081569ee47c61e1363be261fa75.
Oct 8 19:51:39.836335 containerd[1464]: time="2024-10-08T19:51:39.836154659Z" level=info msg="StartContainer for \"ec069ba6ac7e4b60b9ba6018f793b838dd4a8081569ee47c61e1363be261fa75\" returns successfully"
Oct 8 19:51:42.872231 kubelet[2628]: I1008 19:51:42.872159 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-mr76d" podStartSLOduration=5.694746196 podStartE2EDuration="9.872080891s" podCreationTimestamp="2024-10-08 19:51:33 +0000 UTC" firstStartedPulling="2024-10-08 19:51:34.643568011 +0000 UTC m=+16.457386519" lastFinishedPulling="2024-10-08 19:51:38.820902706 +0000 UTC m=+20.634721214" observedRunningTime="2024-10-08 19:51:40.428326236 +0000 UTC m=+22.242144744" watchObservedRunningTime="2024-10-08 19:51:42.872080891 +0000 UTC m=+24.685899399"
Oct 8 19:51:42.872883 kubelet[2628]: I1008 19:51:42.872394 2628 topology_manager.go:215] "Topology Admit Handler" podUID="a0ab4a8b-e00b-4059-b273-71aef73c6aba" podNamespace="calico-system" podName="calico-typha-747fbdf85-6klj9"
Oct 8 19:51:42.883789 systemd[1]: Created slice kubepods-besteffort-poda0ab4a8b_e00b_4059_b273_71aef73c6aba.slice - libcontainer container kubepods-besteffort-poda0ab4a8b_e00b_4059_b273_71aef73c6aba.slice.
Oct 8 19:51:42.939303 kubelet[2628]: I1008 19:51:42.939242 2628 topology_manager.go:215] "Topology Admit Handler" podUID="22022a3f-d85f-4c82-9933-e8ef37d67416" podNamespace="calico-system" podName="calico-node-thbcg"
Oct 8 19:51:42.944864 kubelet[2628]: I1008 19:51:42.944576 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22022a3f-d85f-4c82-9933-e8ef37d67416-node-certs\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.944864 kubelet[2628]: I1008 19:51:42.944625 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-var-lib-calico\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.944864 kubelet[2628]: I1008 19:51:42.944652 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwf8\" (UniqueName: \"kubernetes.io/projected/22022a3f-d85f-4c82-9933-e8ef37d67416-kube-api-access-ctwf8\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.944864 kubelet[2628]: I1008 19:51:42.944676 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-lib-modules\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.944864 kubelet[2628]: I1008 19:51:42.944724 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-cni-bin-dir\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945261 kubelet[2628]: I1008 19:51:42.944800 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22022a3f-d85f-4c82-9933-e8ef37d67416-tigera-ca-bundle\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945261 kubelet[2628]: I1008 19:51:42.944834 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-cni-log-dir\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945261 kubelet[2628]: I1008 19:51:42.944880 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0ab4a8b-e00b-4059-b273-71aef73c6aba-typha-certs\") pod \"calico-typha-747fbdf85-6klj9\" (UID: \"a0ab4a8b-e00b-4059-b273-71aef73c6aba\") " pod="calico-system/calico-typha-747fbdf85-6klj9"
Oct 8 19:51:42.945261 kubelet[2628]: I1008 19:51:42.945025 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtr6\" (UniqueName: \"kubernetes.io/projected/a0ab4a8b-e00b-4059-b273-71aef73c6aba-kube-api-access-qrtr6\") pod \"calico-typha-747fbdf85-6klj9\" (UID: \"a0ab4a8b-e00b-4059-b273-71aef73c6aba\") " pod="calico-system/calico-typha-747fbdf85-6klj9"
Oct 8 19:51:42.945261 kubelet[2628]: I1008 19:51:42.945053 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-cni-net-dir\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945382 kubelet[2628]: I1008 19:51:42.945080 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-flexvol-driver-host\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945382 kubelet[2628]: I1008 19:51:42.945106 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ab4a8b-e00b-4059-b273-71aef73c6aba-tigera-ca-bundle\") pod \"calico-typha-747fbdf85-6klj9\" (UID: \"a0ab4a8b-e00b-4059-b273-71aef73c6aba\") " pod="calico-system/calico-typha-747fbdf85-6klj9"
Oct 8 19:51:42.945382 kubelet[2628]: I1008 19:51:42.945130 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-xtables-lock\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945382 kubelet[2628]: I1008 19:51:42.945154 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-policysync\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.945382 kubelet[2628]: I1008 19:51:42.945189 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/22022a3f-d85f-4c82-9933-e8ef37d67416-var-run-calico\") pod \"calico-node-thbcg\" (UID: \"22022a3f-d85f-4c82-9933-e8ef37d67416\") " pod="calico-system/calico-node-thbcg"
Oct 8 19:51:42.951830 systemd[1]: Created slice kubepods-besteffort-pod22022a3f_d85f_4c82_9933_e8ef37d67416.slice - libcontainer container kubepods-besteffort-pod22022a3f_d85f_4c82_9933_e8ef37d67416.slice.
Oct 8 19:51:43.060772 kubelet[2628]: E1008 19:51:43.057645 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.060772 kubelet[2628]: W1008 19:51:43.057687 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.060772 kubelet[2628]: E1008 19:51:43.057739 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.062330 kubelet[2628]: E1008 19:51:43.061800 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.062330 kubelet[2628]: W1008 19:51:43.061821 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.062330 kubelet[2628]: E1008 19:51:43.061838 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.062460 kubelet[2628]: E1008 19:51:43.062368 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.062460 kubelet[2628]: W1008 19:51:43.062380 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.062460 kubelet[2628]: E1008 19:51:43.062394 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.065105 kubelet[2628]: E1008 19:51:43.065035 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.065105 kubelet[2628]: W1008 19:51:43.065059 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.065105 kubelet[2628]: E1008 19:51:43.065074 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.078134 kubelet[2628]: I1008 19:51:43.078088 2628 topology_manager.go:215] "Topology Admit Handler" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" podNamespace="calico-system" podName="csi-node-driver-88gsg"
Oct 8 19:51:43.079282 kubelet[2628]: E1008 19:51:43.078944 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:43.146809 kubelet[2628]: E1008 19:51:43.146592 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.146809 kubelet[2628]: W1008 19:51:43.146638 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.146809 kubelet[2628]: E1008 19:51:43.146669 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.147192 kubelet[2628]: E1008 19:51:43.147138 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.147192 kubelet[2628]: W1008 19:51:43.147153 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.147192 kubelet[2628]: E1008 19:51:43.147177 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.147482 kubelet[2628]: E1008 19:51:43.147467 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.147482 kubelet[2628]: W1008 19:51:43.147480 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.147605 kubelet[2628]: E1008 19:51:43.147492 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.147728 kubelet[2628]: E1008 19:51:43.147682 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.147728 kubelet[2628]: W1008 19:51:43.147717 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.147728 kubelet[2628]: E1008 19:51:43.147728 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.148023 kubelet[2628]: E1008 19:51:43.147976 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.148023 kubelet[2628]: W1008 19:51:43.148000 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.148023 kubelet[2628]: E1008 19:51:43.148027 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.148384 kubelet[2628]: E1008 19:51:43.148365 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.148384 kubelet[2628]: W1008 19:51:43.148379 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.148478 kubelet[2628]: E1008 19:51:43.148392 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.148686 kubelet[2628]: E1008 19:51:43.148660 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.148686 kubelet[2628]: W1008 19:51:43.148674 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.148686 kubelet[2628]: E1008 19:51:43.148685 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.148934 kubelet[2628]: E1008 19:51:43.148919 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.148934 kubelet[2628]: W1008 19:51:43.148931 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.148996 kubelet[2628]: E1008 19:51:43.148945 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.149197 kubelet[2628]: E1008 19:51:43.149180 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.149197 kubelet[2628]: W1008 19:51:43.149191 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.149256 kubelet[2628]: E1008 19:51:43.149202 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.149402 kubelet[2628]: E1008 19:51:43.149387 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.149402 kubelet[2628]: W1008 19:51:43.149398 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.149457 kubelet[2628]: E1008 19:51:43.149408 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 19:51:43.149614 kubelet[2628]: E1008 19:51:43.149596 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.149614 kubelet[2628]: W1008 19:51:43.149612 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.149675 kubelet[2628]: E1008 19:51:43.149624 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.149847 kubelet[2628]: E1008 19:51:43.149833 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.149847 kubelet[2628]: W1008 19:51:43.149843 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.149909 kubelet[2628]: E1008 19:51:43.149854 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.150103 kubelet[2628]: E1008 19:51:43.150088 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.150103 kubelet[2628]: W1008 19:51:43.150098 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.150177 kubelet[2628]: E1008 19:51:43.150110 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.150314 kubelet[2628]: E1008 19:51:43.150298 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.150314 kubelet[2628]: W1008 19:51:43.150311 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.150372 kubelet[2628]: E1008 19:51:43.150323 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.150516 kubelet[2628]: E1008 19:51:43.150502 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.150516 kubelet[2628]: W1008 19:51:43.150512 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.150573 kubelet[2628]: E1008 19:51:43.150522 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.150750 kubelet[2628]: E1008 19:51:43.150734 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.150750 kubelet[2628]: W1008 19:51:43.150747 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.150817 kubelet[2628]: E1008 19:51:43.150758 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.151006 kubelet[2628]: E1008 19:51:43.150989 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.151006 kubelet[2628]: W1008 19:51:43.151002 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.151061 kubelet[2628]: E1008 19:51:43.151014 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.151230 kubelet[2628]: E1008 19:51:43.151215 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.151230 kubelet[2628]: W1008 19:51:43.151227 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.151289 kubelet[2628]: E1008 19:51:43.151240 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.151441 kubelet[2628]: E1008 19:51:43.151426 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.151441 kubelet[2628]: W1008 19:51:43.151438 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.151499 kubelet[2628]: E1008 19:51:43.151449 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.151684 kubelet[2628]: E1008 19:51:43.151666 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.151684 kubelet[2628]: W1008 19:51:43.151681 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.151775 kubelet[2628]: E1008 19:51:43.151738 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.152044 kubelet[2628]: E1008 19:51:43.152025 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.152044 kubelet[2628]: W1008 19:51:43.152037 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.152130 kubelet[2628]: E1008 19:51:43.152050 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.152130 kubelet[2628]: I1008 19:51:43.152085 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae7adb50-443a-4488-8328-041f1c3fd2cd-varrun\") pod \"csi-node-driver-88gsg\" (UID: \"ae7adb50-443a-4488-8328-041f1c3fd2cd\") " pod="calico-system/csi-node-driver-88gsg" Oct 8 19:51:43.152331 kubelet[2628]: E1008 19:51:43.152311 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.152331 kubelet[2628]: W1008 19:51:43.152326 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.152427 kubelet[2628]: E1008 19:51:43.152344 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.152427 kubelet[2628]: I1008 19:51:43.152362 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae7adb50-443a-4488-8328-041f1c3fd2cd-socket-dir\") pod \"csi-node-driver-88gsg\" (UID: \"ae7adb50-443a-4488-8328-041f1c3fd2cd\") " pod="calico-system/csi-node-driver-88gsg" Oct 8 19:51:43.152590 kubelet[2628]: E1008 19:51:43.152567 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.152590 kubelet[2628]: W1008 19:51:43.152587 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.152660 kubelet[2628]: E1008 19:51:43.152605 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.152660 kubelet[2628]: I1008 19:51:43.152627 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae7adb50-443a-4488-8328-041f1c3fd2cd-registration-dir\") pod \"csi-node-driver-88gsg\" (UID: \"ae7adb50-443a-4488-8328-041f1c3fd2cd\") " pod="calico-system/csi-node-driver-88gsg" Oct 8 19:51:43.153045 kubelet[2628]: E1008 19:51:43.153015 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.153077 kubelet[2628]: W1008 19:51:43.153039 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.153103 kubelet[2628]: E1008 19:51:43.153074 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.153364 kubelet[2628]: E1008 19:51:43.153345 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.153364 kubelet[2628]: W1008 19:51:43.153359 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.153425 kubelet[2628]: E1008 19:51:43.153382 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.153892 kubelet[2628]: E1008 19:51:43.153853 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.153892 kubelet[2628]: W1008 19:51:43.153887 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.153965 kubelet[2628]: E1008 19:51:43.153920 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.154187 kubelet[2628]: E1008 19:51:43.154153 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.154187 kubelet[2628]: W1008 19:51:43.154176 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.154245 kubelet[2628]: E1008 19:51:43.154217 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.154404 kubelet[2628]: E1008 19:51:43.154383 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.154404 kubelet[2628]: W1008 19:51:43.154402 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.154452 kubelet[2628]: E1008 19:51:43.154431 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.154485 kubelet[2628]: I1008 19:51:43.154469 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae7adb50-443a-4488-8328-041f1c3fd2cd-kubelet-dir\") pod \"csi-node-driver-88gsg\" (UID: \"ae7adb50-443a-4488-8328-041f1c3fd2cd\") " pod="calico-system/csi-node-driver-88gsg" Oct 8 19:51:43.154651 kubelet[2628]: E1008 19:51:43.154631 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.154651 kubelet[2628]: W1008 19:51:43.154644 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.154782 kubelet[2628]: E1008 19:51:43.154660 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.154975 kubelet[2628]: E1008 19:51:43.154958 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.155066 kubelet[2628]: W1008 19:51:43.154975 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.155066 kubelet[2628]: E1008 19:51:43.155000 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.155268 kubelet[2628]: E1008 19:51:43.155253 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.155309 kubelet[2628]: W1008 19:51:43.155267 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.155309 kubelet[2628]: E1008 19:51:43.155287 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.155354 kubelet[2628]: I1008 19:51:43.155311 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsxf\" (UniqueName: \"kubernetes.io/projected/ae7adb50-443a-4488-8328-041f1c3fd2cd-kube-api-access-gjsxf\") pod \"csi-node-driver-88gsg\" (UID: \"ae7adb50-443a-4488-8328-041f1c3fd2cd\") " pod="calico-system/csi-node-driver-88gsg" Oct 8 19:51:43.155593 kubelet[2628]: E1008 19:51:43.155577 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.155593 kubelet[2628]: W1008 19:51:43.155591 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.155646 kubelet[2628]: E1008 19:51:43.155609 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.155890 kubelet[2628]: E1008 19:51:43.155858 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.155890 kubelet[2628]: W1008 19:51:43.155870 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.155890 kubelet[2628]: E1008 19:51:43.155885 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.156103 kubelet[2628]: E1008 19:51:43.156089 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.156103 kubelet[2628]: W1008 19:51:43.156101 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.156151 kubelet[2628]: E1008 19:51:43.156114 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.156318 kubelet[2628]: E1008 19:51:43.156305 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.156318 kubelet[2628]: W1008 19:51:43.156314 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.156374 kubelet[2628]: E1008 19:51:43.156324 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.189499 kubelet[2628]: E1008 19:51:43.189460 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:43.190184 containerd[1464]: time="2024-10-08T19:51:43.190137629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747fbdf85-6klj9,Uid:a0ab4a8b-e00b-4059-b273-71aef73c6aba,Namespace:calico-system,Attempt:0,}" Oct 8 19:51:43.256597 kubelet[2628]: E1008 19:51:43.256550 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:43.256860 kubelet[2628]: E1008 19:51:43.256824 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.256860 kubelet[2628]: W1008 19:51:43.256841 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.256860 kubelet[2628]: E1008 19:51:43.256862 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.257120 kubelet[2628]: E1008 19:51:43.257104 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.257120 kubelet[2628]: W1008 19:51:43.257116 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.257178 kubelet[2628]: E1008 19:51:43.257171 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.257213 containerd[1464]: time="2024-10-08T19:51:43.257177248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thbcg,Uid:22022a3f-d85f-4c82-9933-e8ef37d67416,Namespace:calico-system,Attempt:0,}" Oct 8 19:51:43.257490 kubelet[2628]: E1008 19:51:43.257457 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.257490 kubelet[2628]: W1008 19:51:43.257480 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.257571 kubelet[2628]: E1008 19:51:43.257515 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.257832 kubelet[2628]: E1008 19:51:43.257816 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.257832 kubelet[2628]: W1008 19:51:43.257829 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.257906 kubelet[2628]: E1008 19:51:43.257850 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.258094 kubelet[2628]: E1008 19:51:43.258074 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.258094 kubelet[2628]: W1008 19:51:43.258089 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.258171 kubelet[2628]: E1008 19:51:43.258106 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.258379 kubelet[2628]: E1008 19:51:43.258358 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.258379 kubelet[2628]: W1008 19:51:43.258370 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.258439 kubelet[2628]: E1008 19:51:43.258386 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.258625 kubelet[2628]: E1008 19:51:43.258607 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.258625 kubelet[2628]: W1008 19:51:43.258621 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.258862 kubelet[2628]: E1008 19:51:43.258643 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.258915 kubelet[2628]: E1008 19:51:43.258872 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.258915 kubelet[2628]: W1008 19:51:43.258883 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.258915 kubelet[2628]: E1008 19:51:43.258904 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.259145 kubelet[2628]: E1008 19:51:43.259130 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.259189 kubelet[2628]: W1008 19:51:43.259166 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.259219 kubelet[2628]: E1008 19:51:43.259187 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.259413 kubelet[2628]: E1008 19:51:43.259400 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.259413 kubelet[2628]: W1008 19:51:43.259410 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.259458 kubelet[2628]: E1008 19:51:43.259426 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.259664 kubelet[2628]: E1008 19:51:43.259643 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.259664 kubelet[2628]: W1008 19:51:43.259655 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.259777 kubelet[2628]: E1008 19:51:43.259671 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.259925 kubelet[2628]: E1008 19:51:43.259908 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.259925 kubelet[2628]: W1008 19:51:43.259919 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.260000 kubelet[2628]: E1008 19:51:43.259943 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.260121 kubelet[2628]: E1008 19:51:43.260105 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.260121 kubelet[2628]: W1008 19:51:43.260115 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.260211 kubelet[2628]: E1008 19:51:43.260139 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.260316 kubelet[2628]: E1008 19:51:43.260299 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.260316 kubelet[2628]: W1008 19:51:43.260310 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.260386 kubelet[2628]: E1008 19:51:43.260324 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.260580 kubelet[2628]: E1008 19:51:43.260562 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.260580 kubelet[2628]: W1008 19:51:43.260574 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.260671 kubelet[2628]: E1008 19:51:43.260593 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.260789 kubelet[2628]: E1008 19:51:43.260775 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.260789 kubelet[2628]: W1008 19:51:43.260785 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.260840 kubelet[2628]: E1008 19:51:43.260799 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.261024 kubelet[2628]: E1008 19:51:43.261008 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.261024 kubelet[2628]: W1008 19:51:43.261019 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.261101 kubelet[2628]: E1008 19:51:43.261034 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.261277 kubelet[2628]: E1008 19:51:43.261262 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.261277 kubelet[2628]: W1008 19:51:43.261273 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.261326 kubelet[2628]: E1008 19:51:43.261287 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.261516 kubelet[2628]: E1008 19:51:43.261500 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.261538 kubelet[2628]: W1008 19:51:43.261517 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.261572 kubelet[2628]: E1008 19:51:43.261553 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.261775 kubelet[2628]: E1008 19:51:43.261762 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.261800 kubelet[2628]: W1008 19:51:43.261775 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.261832 kubelet[2628]: E1008 19:51:43.261797 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.262022 kubelet[2628]: E1008 19:51:43.262007 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.262022 kubelet[2628]: W1008 19:51:43.262020 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.262089 kubelet[2628]: E1008 19:51:43.262042 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.262250 kubelet[2628]: E1008 19:51:43.262234 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.262250 kubelet[2628]: W1008 19:51:43.262246 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.262291 kubelet[2628]: E1008 19:51:43.262265 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.262467 kubelet[2628]: E1008 19:51:43.262454 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.262467 kubelet[2628]: W1008 19:51:43.262464 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.262523 kubelet[2628]: E1008 19:51:43.262478 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.262847 kubelet[2628]: E1008 19:51:43.262826 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.262847 kubelet[2628]: W1008 19:51:43.262844 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.262923 kubelet[2628]: E1008 19:51:43.262861 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.361264 kubelet[2628]: E1008 19:51:43.361225 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.361264 kubelet[2628]: W1008 19:51:43.361250 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.361264 kubelet[2628]: E1008 19:51:43.361276 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.361575 kubelet[2628]: E1008 19:51:43.361554 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.361575 kubelet[2628]: W1008 19:51:43.361569 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.361643 kubelet[2628]: E1008 19:51:43.361586 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.463000 kubelet[2628]: E1008 19:51:43.462849 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.463000 kubelet[2628]: W1008 19:51:43.462879 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.463000 kubelet[2628]: E1008 19:51:43.462909 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.463227 kubelet[2628]: E1008 19:51:43.463208 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.463227 kubelet[2628]: W1008 19:51:43.463222 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.463277 kubelet[2628]: E1008 19:51:43.463236 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.563867 kubelet[2628]: E1008 19:51:43.563826 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.563867 kubelet[2628]: W1008 19:51:43.563850 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.563867 kubelet[2628]: E1008 19:51:43.563874 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:43.564210 kubelet[2628]: E1008 19:51:43.564112 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.564210 kubelet[2628]: W1008 19:51:43.564121 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.564210 kubelet[2628]: E1008 19:51:43.564133 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:43.567575 kubelet[2628]: E1008 19:51:43.567543 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:43.567619 kubelet[2628]: W1008 19:51:43.567575 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:43.567619 kubelet[2628]: E1008 19:51:43.567610 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 8 19:51:43.664906 kubelet[2628]: E1008 19:51:43.664853 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.664906 kubelet[2628]: W1008 19:51:43.664881 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.664906 kubelet[2628]: E1008 19:51:43.664909 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:43.716987 kubelet[2628]: E1008 19:51:43.716835 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:43.716987 kubelet[2628]: W1008 19:51:43.716861 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:43.716987 kubelet[2628]: E1008 19:51:43.716883 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:44.293667 kubelet[2628]: E1008 19:51:44.293616 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:44.440534 containerd[1464]: time="2024-10-08T19:51:44.440392798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:51:44.440534 containerd[1464]: time="2024-10-08T19:51:44.440484901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:51:44.440534 containerd[1464]: time="2024-10-08T19:51:44.440501332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:44.442180 containerd[1464]: time="2024-10-08T19:51:44.440619825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:44.471994 systemd[1]: Started cri-containerd-0caa64be1cb610ef5d6ded95a261562ead7bb1fb482334dc485f32ad5c2f3f6f.scope - libcontainer container 0caa64be1cb610ef5d6ded95a261562ead7bb1fb482334dc485f32ad5c2f3f6f.
Oct 8 19:51:44.496265 containerd[1464]: time="2024-10-08T19:51:44.494902495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:51:44.496265 containerd[1464]: time="2024-10-08T19:51:44.495748645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:51:44.496265 containerd[1464]: time="2024-10-08T19:51:44.495761509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:44.496265 containerd[1464]: time="2024-10-08T19:51:44.495854554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:44.530020 systemd[1]: Started cri-containerd-6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad.scope - libcontainer container 6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad.
Oct 8 19:51:44.539246 containerd[1464]: time="2024-10-08T19:51:44.539020933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747fbdf85-6klj9,Uid:a0ab4a8b-e00b-4059-b273-71aef73c6aba,Namespace:calico-system,Attempt:0,} returns sandbox id \"0caa64be1cb610ef5d6ded95a261562ead7bb1fb482334dc485f32ad5c2f3f6f\""
Oct 8 19:51:44.541207 kubelet[2628]: E1008 19:51:44.541167 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:44.543789 containerd[1464]: time="2024-10-08T19:51:44.542804520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 8 19:51:44.566835 containerd[1464]: time="2024-10-08T19:51:44.566752506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thbcg,Uid:22022a3f-d85f-4c82-9933-e8ef37d67416,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\""
Oct 8 19:51:44.568065 kubelet[2628]: E1008 19:51:44.568031 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:46.295740 kubelet[2628]: E1008 19:51:46.294310 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:48.294063 kubelet[2628]: E1008 19:51:48.293999 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:48.625251 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:52336.service - OpenSSH per-connection server daemon (10.0.0.1:52336).
Oct 8 19:51:48.702253 sshd[3183]: Accepted publickey for core from 10.0.0.1 port 52336 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:51:48.704481 sshd[3183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:51:48.715468 systemd-logind[1442]: New session 10 of user core.
Oct 8 19:51:48.723888 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:51:48.801538 containerd[1464]: time="2024-10-08T19:51:48.800658327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:48.897092 sshd[3183]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:48.902575 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:52336.service: Deactivated successfully.
Oct 8 19:51:48.905021 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:51:48.906518 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:51:48.907627 systemd-logind[1442]: Removed session 10.
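[Editor's note] The repeated kubelet errors in this log ("FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds ... executable file not found in $PATH") come from kubelet probing a FlexVolume plugin directory before any driver binary exists there: the exec produces no output, so unmarshalling "" as JSON fails with "unexpected end of JSON input". A minimal sketch of the driver-call contract kubelet expects, modeled as a shell function under the standard FlexVolume calling convention (the `nodeagent~uds`/`uds` path is taken from the log; the JSON shown is the conventional driver reply, not output from this system):

```shell
#!/bin/sh
# Sketch of a FlexVolume driver's command interface. kubelet execs the
# driver binary with a subcommand ("init" during plugin probing) and
# parses its stdout as JSON; a missing binary yields empty output, which
# is exactly the "unexpected end of JSON input" error in the log.
flexvolume_driver() {
  case "$1" in
    init)
      # A driver must answer init with a JSON status object.
      printf '{"status": "Success", "capabilities": {"attach": false}}\n'
      ;;
    *)
      # Unimplemented calls report "Not supported" so kubelet can fall back.
      printf '{"status": "Not supported"}\n'
      return 1
      ;;
  esac
}

flexvolume_driver init
```

In Calico deployments this binary is normally installed by the calico-node flexvol init container (the log shows the matching image, ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1, being pulled), so these probe errors generally stop once that pod is running.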
Oct 8 19:51:49.052132 containerd[1464]: time="2024-10-08T19:51:49.052030675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Oct 8 19:51:49.100570 containerd[1464]: time="2024-10-08T19:51:49.100467181Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:49.152314 containerd[1464]: time="2024-10-08T19:51:49.152106772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:49.152828 containerd[1464]: time="2024-10-08T19:51:49.152782009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 4.60993553s"
Oct 8 19:51:49.152898 containerd[1464]: time="2024-10-08T19:51:49.152829348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Oct 8 19:51:49.154020 containerd[1464]: time="2024-10-08T19:51:49.153728256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 8 19:51:49.163527 containerd[1464]: time="2024-10-08T19:51:49.163436365Z" level=info msg="CreateContainer within sandbox \"0caa64be1cb610ef5d6ded95a261562ead7bb1fb482334dc485f32ad5c2f3f6f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:51:50.294122 kubelet[2628]: E1008 19:51:50.294052 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:50.357181 containerd[1464]: time="2024-10-08T19:51:50.357096790Z" level=info msg="CreateContainer within sandbox \"0caa64be1cb610ef5d6ded95a261562ead7bb1fb482334dc485f32ad5c2f3f6f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"30621f48696767d9070ac89159ad1ab74e37d212b56cd39aa85970879aa5386d\""
Oct 8 19:51:50.357804 containerd[1464]: time="2024-10-08T19:51:50.357739116Z" level=info msg="StartContainer for \"30621f48696767d9070ac89159ad1ab74e37d212b56cd39aa85970879aa5386d\""
Oct 8 19:51:50.402063 systemd[1]: Started cri-containerd-30621f48696767d9070ac89159ad1ab74e37d212b56cd39aa85970879aa5386d.scope - libcontainer container 30621f48696767d9070ac89159ad1ab74e37d212b56cd39aa85970879aa5386d.
Oct 8 19:51:50.954884 containerd[1464]: time="2024-10-08T19:51:50.954813873Z" level=info msg="StartContainer for \"30621f48696767d9070ac89159ad1ab74e37d212b56cd39aa85970879aa5386d\" returns successfully"
Oct 8 19:51:51.374841 kubelet[2628]: E1008 19:51:51.374805 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:51.407880 kubelet[2628]: E1008 19:51:51.407839 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.407880 kubelet[2628]: W1008 19:51:51.407866 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.407880 kubelet[2628]: E1008 19:51:51.407891 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.408243 kubelet[2628]: E1008 19:51:51.408182 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.408243 kubelet[2628]: W1008 19:51:51.408194 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.408243 kubelet[2628]: E1008 19:51:51.408211 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.408489 kubelet[2628]: E1008 19:51:51.408473 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.408489 kubelet[2628]: W1008 19:51:51.408487 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.408576 kubelet[2628]: E1008 19:51:51.408502 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:51.408790 kubelet[2628]: E1008 19:51:51.408767 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.408790 kubelet[2628]: W1008 19:51:51.408783 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.408902 kubelet[2628]: E1008 19:51:51.408801 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.409098 kubelet[2628]: E1008 19:51:51.409072 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.409098 kubelet[2628]: W1008 19:51:51.409085 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.409098 kubelet[2628]: E1008 19:51:51.409100 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:51.409351 kubelet[2628]: E1008 19:51:51.409333 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.409351 kubelet[2628]: W1008 19:51:51.409345 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.409433 kubelet[2628]: E1008 19:51:51.409360 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.409614 kubelet[2628]: E1008 19:51:51.409597 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.409614 kubelet[2628]: W1008 19:51:51.409610 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.409719 kubelet[2628]: E1008 19:51:51.409624 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:51.409926 kubelet[2628]: E1008 19:51:51.409894 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.409926 kubelet[2628]: W1008 19:51:51.409910 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.409926 kubelet[2628]: E1008 19:51:51.409926 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.410202 kubelet[2628]: E1008 19:51:51.410184 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.410202 kubelet[2628]: W1008 19:51:51.410199 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.410287 kubelet[2628]: E1008 19:51:51.410214 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:51.410439 kubelet[2628]: E1008 19:51:51.410421 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.410439 kubelet[2628]: W1008 19:51:51.410434 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.410512 kubelet[2628]: E1008 19:51:51.410448 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.410714 kubelet[2628]: E1008 19:51:51.410683 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.410714 kubelet[2628]: W1008 19:51:51.410713 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.410817 kubelet[2628]: E1008 19:51:51.410729 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:51:51.410978 kubelet[2628]: E1008 19:51:51.410955 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.410978 kubelet[2628]: W1008 19:51:51.410968 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.411064 kubelet[2628]: E1008 19:51:51.410982 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:51:51.411241 kubelet[2628]: E1008 19:51:51.411211 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:51:51.411241 kubelet[2628]: W1008 19:51:51.411223 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:51:51.411241 kubelet[2628]: E1008 19:51:51.411235 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 8 19:51:51.411571 kubelet[2628]: E1008 19:51:51.411550 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.411571 kubelet[2628]: W1008 19:51:51.411562 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.411626 kubelet[2628]: E1008 19:51:51.411573 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.411818 kubelet[2628]: E1008 19:51:51.411796 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.411818 kubelet[2628]: W1008 19:51:51.411807 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.411818 kubelet[2628]: E1008 19:51:51.411818 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.425393 kubelet[2628]: E1008 19:51:51.425345 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.425393 kubelet[2628]: W1008 19:51:51.425372 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.425393 kubelet[2628]: E1008 19:51:51.425396 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.425674 kubelet[2628]: E1008 19:51:51.425649 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.425674 kubelet[2628]: W1008 19:51:51.425661 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.425754 kubelet[2628]: E1008 19:51:51.425678 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.426088 kubelet[2628]: E1008 19:51:51.426056 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.426088 kubelet[2628]: W1008 19:51:51.426079 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.426147 kubelet[2628]: E1008 19:51:51.426108 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.426351 kubelet[2628]: E1008 19:51:51.426332 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.426351 kubelet[2628]: W1008 19:51:51.426345 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.426408 kubelet[2628]: E1008 19:51:51.426364 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.426598 kubelet[2628]: E1008 19:51:51.426578 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.426598 kubelet[2628]: W1008 19:51:51.426591 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.426653 kubelet[2628]: E1008 19:51:51.426607 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.426872 kubelet[2628]: E1008 19:51:51.426854 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.426872 kubelet[2628]: W1008 19:51:51.426869 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.426974 kubelet[2628]: E1008 19:51:51.426889 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.427151 kubelet[2628]: E1008 19:51:51.427137 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.427151 kubelet[2628]: W1008 19:51:51.427148 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.427228 kubelet[2628]: E1008 19:51:51.427185 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.427388 kubelet[2628]: E1008 19:51:51.427364 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.427388 kubelet[2628]: W1008 19:51:51.427376 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.427464 kubelet[2628]: E1008 19:51:51.427402 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.427570 kubelet[2628]: E1008 19:51:51.427556 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.427570 kubelet[2628]: W1008 19:51:51.427566 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.427638 kubelet[2628]: E1008 19:51:51.427602 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.427826 kubelet[2628]: E1008 19:51:51.427811 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.427826 kubelet[2628]: W1008 19:51:51.427823 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.427908 kubelet[2628]: E1008 19:51:51.427839 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.428100 kubelet[2628]: E1008 19:51:51.428080 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.428100 kubelet[2628]: W1008 19:51:51.428093 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.428183 kubelet[2628]: E1008 19:51:51.428111 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.428328 kubelet[2628]: E1008 19:51:51.428310 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.428328 kubelet[2628]: W1008 19:51:51.428321 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.428391 kubelet[2628]: E1008 19:51:51.428335 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.428595 kubelet[2628]: E1008 19:51:51.428576 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.428595 kubelet[2628]: W1008 19:51:51.428587 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.428665 kubelet[2628]: E1008 19:51:51.428605 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.428992 kubelet[2628]: E1008 19:51:51.428973 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.428992 kubelet[2628]: W1008 19:51:51.428987 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.429085 kubelet[2628]: E1008 19:51:51.429012 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.429314 kubelet[2628]: E1008 19:51:51.429294 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.429314 kubelet[2628]: W1008 19:51:51.429306 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.429405 kubelet[2628]: E1008 19:51:51.429324 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.429636 kubelet[2628]: E1008 19:51:51.429618 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.429636 kubelet[2628]: W1008 19:51:51.429633 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.429743 kubelet[2628]: E1008 19:51:51.429655 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.429938 kubelet[2628]: E1008 19:51:51.429921 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.429938 kubelet[2628]: W1008 19:51:51.429937 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.429999 kubelet[2628]: E1008 19:51:51.429958 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.473995 kubelet[2628]: E1008 19:51:51.473930 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:51.473995 kubelet[2628]: W1008 19:51:51.473968 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:51.473995 kubelet[2628]: E1008 19:51:51.473996 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:51.639913 kubelet[2628]: I1008 19:51:51.639725 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-747fbdf85-6klj9" podStartSLOduration=5.02893974 podStartE2EDuration="9.63965229s" podCreationTimestamp="2024-10-08 19:51:42 +0000 UTC" firstStartedPulling="2024-10-08 19:51:44.542562966 +0000 UTC m=+26.356381474" lastFinishedPulling="2024-10-08 19:51:49.153275516 +0000 UTC m=+30.967094024" observedRunningTime="2024-10-08 19:51:51.639117917 +0000 UTC m=+33.452936425" watchObservedRunningTime="2024-10-08 19:51:51.63965229 +0000 UTC m=+33.453470798"
Oct 8 19:51:52.293778 kubelet[2628]: E1008 19:51:52.293686 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd"
Oct 8 19:51:52.376869 kubelet[2628]: I1008 19:51:52.376829 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:51:52.377563 kubelet[2628]: E1008 19:51:52.377544 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:52.422319 kubelet[2628]: E1008 19:51:52.422093 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.422319 kubelet[2628]: W1008 19:51:52.422126 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.422319 kubelet[2628]: E1008 19:51:52.422156 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.422950 kubelet[2628]: E1008 19:51:52.422685 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.422950 kubelet[2628]: W1008 19:51:52.422782 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.422950 kubelet[2628]: E1008 19:51:52.422799 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.425308 kubelet[2628]: E1008 19:51:52.425256 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.425308 kubelet[2628]: W1008 19:51:52.425299 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.425418 kubelet[2628]: E1008 19:51:52.425334 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.426333 kubelet[2628]: E1008 19:51:52.426277 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.426333 kubelet[2628]: W1008 19:51:52.426319 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.426402 kubelet[2628]: E1008 19:51:52.426357 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.426776 kubelet[2628]: E1008 19:51:52.426753 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.426776 kubelet[2628]: W1008 19:51:52.426769 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.426858 kubelet[2628]: E1008 19:51:52.426782 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.427119 kubelet[2628]: E1008 19:51:52.427099 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.427119 kubelet[2628]: W1008 19:51:52.427117 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.427174 kubelet[2628]: E1008 19:51:52.427132 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.427457 kubelet[2628]: E1008 19:51:52.427431 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.427457 kubelet[2628]: W1008 19:51:52.427446 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.427457 kubelet[2628]: E1008 19:51:52.427459 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.427765 kubelet[2628]: E1008 19:51:52.427747 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.427765 kubelet[2628]: W1008 19:51:52.427762 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.427765 kubelet[2628]: E1008 19:51:52.427776 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.428068 kubelet[2628]: E1008 19:51:52.428046 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.428068 kubelet[2628]: W1008 19:51:52.428066 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.428138 kubelet[2628]: E1008 19:51:52.428088 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.428447 kubelet[2628]: E1008 19:51:52.428425 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.428447 kubelet[2628]: W1008 19:51:52.428438 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.428512 kubelet[2628]: E1008 19:51:52.428452 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.428745 kubelet[2628]: E1008 19:51:52.428720 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.428745 kubelet[2628]: W1008 19:51:52.428734 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.428745 kubelet[2628]: E1008 19:51:52.428747 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.429033 kubelet[2628]: E1008 19:51:52.429007 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.429033 kubelet[2628]: W1008 19:51:52.429022 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.429033 kubelet[2628]: E1008 19:51:52.429034 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.429298 kubelet[2628]: E1008 19:51:52.429267 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.429298 kubelet[2628]: W1008 19:51:52.429280 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.429298 kubelet[2628]: E1008 19:51:52.429293 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.429526 kubelet[2628]: E1008 19:51:52.429499 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.429526 kubelet[2628]: W1008 19:51:52.429512 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.429526 kubelet[2628]: E1008 19:51:52.429524 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.429802 kubelet[2628]: E1008 19:51:52.429786 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.429802 kubelet[2628]: W1008 19:51:52.429798 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.429884 kubelet[2628]: E1008 19:51:52.429811 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.432331 kubelet[2628]: E1008 19:51:52.432110 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.432331 kubelet[2628]: W1008 19:51:52.432127 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.432331 kubelet[2628]: E1008 19:51:52.432141 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.432582 kubelet[2628]: E1008 19:51:52.432477 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.432582 kubelet[2628]: W1008 19:51:52.432498 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.432582 kubelet[2628]: E1008 19:51:52.432531 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.432960 kubelet[2628]: E1008 19:51:52.432902 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.432960 kubelet[2628]: W1008 19:51:52.432935 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.432960 kubelet[2628]: E1008 19:51:52.432956 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.433373 kubelet[2628]: E1008 19:51:52.433280 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.433373 kubelet[2628]: W1008 19:51:52.433298 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.433373 kubelet[2628]: E1008 19:51:52.433321 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.433622 kubelet[2628]: E1008 19:51:52.433571 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.433622 kubelet[2628]: W1008 19:51:52.433581 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.433665 kubelet[2628]: E1008 19:51:52.433622 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.433844 kubelet[2628]: E1008 19:51:52.433828 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.433844 kubelet[2628]: W1008 19:51:52.433841 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.434056 kubelet[2628]: E1008 19:51:52.433879 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.434082 kubelet[2628]: E1008 19:51:52.434071 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.434104 kubelet[2628]: W1008 19:51:52.434081 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.434147 kubelet[2628]: E1008 19:51:52.434126 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.434325 kubelet[2628]: E1008 19:51:52.434296 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.434325 kubelet[2628]: W1008 19:51:52.434310 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.434325 kubelet[2628]: E1008 19:51:52.434329 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.434592 kubelet[2628]: E1008 19:51:52.434573 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.434592 kubelet[2628]: W1008 19:51:52.434585 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.434667 kubelet[2628]: E1008 19:51:52.434603 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.435130 kubelet[2628]: E1008 19:51:52.435103 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.435130 kubelet[2628]: W1008 19:51:52.435122 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.435210 kubelet[2628]: E1008 19:51:52.435150 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.435436 kubelet[2628]: E1008 19:51:52.435415 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.435436 kubelet[2628]: W1008 19:51:52.435428 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.435529 kubelet[2628]: E1008 19:51:52.435470 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.435732 kubelet[2628]: E1008 19:51:52.435711 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.435732 kubelet[2628]: W1008 19:51:52.435725 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.435822 kubelet[2628]: E1008 19:51:52.435763 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.435988 kubelet[2628]: E1008 19:51:52.435970 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.435988 kubelet[2628]: W1008 19:51:52.435984 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.436048 kubelet[2628]: E1008 19:51:52.436005 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.436313 kubelet[2628]: E1008 19:51:52.436271 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.436313 kubelet[2628]: W1008 19:51:52.436289 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.436313 kubelet[2628]: E1008 19:51:52.436313 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.436780 kubelet[2628]: E1008 19:51:52.436740 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.436780 kubelet[2628]: W1008 19:51:52.436760 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.436780 kubelet[2628]: E1008 19:51:52.436774 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.437043 kubelet[2628]: E1008 19:51:52.437028 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.437043 kubelet[2628]: W1008 19:51:52.437042 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.437119 kubelet[2628]: E1008 19:51:52.437063 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.437352 kubelet[2628]: E1008 19:51:52.437320 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.437352 kubelet[2628]: W1008 19:51:52.437337 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.437427 kubelet[2628]: E1008 19:51:52.437377 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:52.437868 kubelet[2628]: E1008 19:51:52.437839 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:51:52.437868 kubelet[2628]: W1008 19:51:52.437854 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:51:52.437868 kubelet[2628]: E1008 19:51:52.437867 2628 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:51:53.319804 containerd[1464]: time="2024-10-08T19:51:53.319639807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:53.345400 containerd[1464]: time="2024-10-08T19:51:53.345314471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Oct 8 19:51:53.394331 containerd[1464]: time="2024-10-08T19:51:53.394227918Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:53.424769 containerd[1464]: time="2024-10-08T19:51:53.424645025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:51:53.425476 containerd[1464]: time="2024-10-08T19:51:53.425416623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 4.271638924s"
Oct 8 19:51:53.425476 containerd[1464]: time="2024-10-08T19:51:53.425471345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Oct 8 19:51:53.427956 containerd[1464]: time="2024-10-08T19:51:53.427913309Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:51:53.909322 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:43832.service - OpenSSH per-connection server daemon (10.0.0.1:43832). Oct 8 19:51:53.966042 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 43832 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:53.967872 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:53.972946 systemd-logind[1442]: New session 11 of user core. Oct 8 19:51:53.986902 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:51:54.100779 sshd[3360]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:54.105657 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:43832.service: Deactivated successfully. Oct 8 19:51:54.107784 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:51:54.108504 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:51:54.109525 systemd-logind[1442]: Removed session 11. Oct 8 19:51:54.136748 containerd[1464]: time="2024-10-08T19:51:54.136667882Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21\"" Oct 8 19:51:54.137345 containerd[1464]: time="2024-10-08T19:51:54.137307583Z" level=info msg="StartContainer for \"1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21\"" Oct 8 19:51:54.170953 systemd[1]: Started cri-containerd-1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21.scope - libcontainer container 1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21. Oct 8 19:51:54.217869 systemd[1]: cri-containerd-1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21.scope: Deactivated successfully. 
Oct 8 19:51:54.294017 kubelet[2628]: E1008 19:51:54.293925 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:51:54.323080 containerd[1464]: time="2024-10-08T19:51:54.322939797Z" level=info msg="StartContainer for \"1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21\" returns successfully" Oct 8 19:51:54.347315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21-rootfs.mount: Deactivated successfully. Oct 8 19:51:54.382166 kubelet[2628]: E1008 19:51:54.382134 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:54.699608 containerd[1464]: time="2024-10-08T19:51:54.699510665Z" level=info msg="shim disconnected" id=1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21 namespace=k8s.io Oct 8 19:51:54.699608 containerd[1464]: time="2024-10-08T19:51:54.699600433Z" level=warning msg="cleaning up after shim disconnected" id=1dfb505be068e42c0dbb11fb04e7269164247e1bf437bb429e1de50a06a2da21 namespace=k8s.io Oct 8 19:51:54.699608 containerd[1464]: time="2024-10-08T19:51:54.699617605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:51:55.386110 kubelet[2628]: E1008 19:51:55.386064 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:51:55.386748 containerd[1464]: time="2024-10-08T19:51:55.386717410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:51:56.293770 kubelet[2628]: E1008 
19:51:56.293725 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:51:58.293942 kubelet[2628]: E1008 19:51:58.293891 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:51:59.112031 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:43834.service - OpenSSH per-connection server daemon (10.0.0.1:43834). Oct 8 19:51:59.371568 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 43834 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:59.373283 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:59.377582 systemd-logind[1442]: New session 12 of user core. Oct 8 19:51:59.386844 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:51:59.497720 sshd[3444]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:59.502411 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:43834.service: Deactivated successfully. Oct 8 19:51:59.504924 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:51:59.505613 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:51:59.506771 systemd-logind[1442]: Removed session 12. 
Oct 8 19:52:00.294420 kubelet[2628]: E1008 19:52:00.294352 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:02.296119 kubelet[2628]: E1008 19:52:02.296047 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:03.276681 containerd[1464]: time="2024-10-08T19:52:03.275362590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:03.277431 containerd[1464]: time="2024-10-08T19:52:03.277124466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 19:52:03.279246 containerd[1464]: time="2024-10-08T19:52:03.279170605Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:03.282494 containerd[1464]: time="2024-10-08T19:52:03.282340203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:03.283415 containerd[1464]: time="2024-10-08T19:52:03.283167584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 7.896396574s" Oct 8 19:52:03.283415 containerd[1464]: time="2024-10-08T19:52:03.283230453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 19:52:03.286614 containerd[1464]: time="2024-10-08T19:52:03.286551544Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:52:03.324847 containerd[1464]: time="2024-10-08T19:52:03.324533779Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c\"" Oct 8 19:52:03.325907 containerd[1464]: time="2024-10-08T19:52:03.325590001Z" level=info msg="StartContainer for \"a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c\"" Oct 8 19:52:03.367180 systemd[1]: Started cri-containerd-a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c.scope - libcontainer container a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c. 
Oct 8 19:52:03.471184 containerd[1464]: time="2024-10-08T19:52:03.471114787Z" level=info msg="StartContainer for \"a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c\" returns successfully" Oct 8 19:52:04.293964 kubelet[2628]: E1008 19:52:04.293904 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:04.409570 kubelet[2628]: E1008 19:52:04.409519 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:04.513394 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:43998.service - OpenSSH per-connection server daemon (10.0.0.1:43998). Oct 8 19:52:04.553780 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 43998 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:04.556017 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:04.610690 systemd-logind[1442]: New session 13 of user core. Oct 8 19:52:04.617858 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 8 19:52:05.411259 kubelet[2628]: E1008 19:52:05.411220 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:06.294153 kubelet[2628]: E1008 19:52:06.294083 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:07.185062 sshd[3503]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:07.189926 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:43998.service: Deactivated successfully. Oct 8 19:52:07.192513 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:52:07.193288 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:52:07.194530 systemd-logind[1442]: Removed session 13. Oct 8 19:52:08.293597 kubelet[2628]: E1008 19:52:08.293546 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:09.666508 systemd[1]: cri-containerd-a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c.scope: Deactivated successfully. Oct 8 19:52:09.693537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c-rootfs.mount: Deactivated successfully. 
Oct 8 19:52:09.711332 containerd[1464]: time="2024-10-08T19:52:09.711210099Z" level=info msg="shim disconnected" id=a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c namespace=k8s.io Oct 8 19:52:09.711332 containerd[1464]: time="2024-10-08T19:52:09.711302358Z" level=warning msg="cleaning up after shim disconnected" id=a9576ae4e1a00c7459b3f8e3e54b60bbca6cd978e673fd2a559c2b6d7639951c namespace=k8s.io Oct 8 19:52:09.711332 containerd[1464]: time="2024-10-08T19:52:09.711315333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:52:09.721031 kubelet[2628]: I1008 19:52:09.720974 2628 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:52:09.766603 kubelet[2628]: I1008 19:52:09.766458 2628 topology_manager.go:215] "Topology Admit Handler" podUID="d890c593-8733-4509-ba00-18cbdb137a3b" podNamespace="kube-system" podName="coredns-76f75df574-qlplh" Oct 8 19:52:09.771538 kubelet[2628]: I1008 19:52:09.771050 2628 topology_manager.go:215] "Topology Admit Handler" podUID="899a83bf-2f3f-42fc-8f12-c8d235d4f83d" podNamespace="kube-system" podName="coredns-76f75df574-l746d" Oct 8 19:52:09.771538 kubelet[2628]: I1008 19:52:09.771156 2628 topology_manager.go:215] "Topology Admit Handler" podUID="779c09a7-b1aa-448c-b504-3cddbdcbc6af" podNamespace="calico-system" podName="calico-kube-controllers-78df779756-sx78s" Oct 8 19:52:09.781143 systemd[1]: Created slice kubepods-burstable-podd890c593_8733_4509_ba00_18cbdb137a3b.slice - libcontainer container kubepods-burstable-podd890c593_8733_4509_ba00_18cbdb137a3b.slice. Oct 8 19:52:09.787173 systemd[1]: Created slice kubepods-burstable-pod899a83bf_2f3f_42fc_8f12_c8d235d4f83d.slice - libcontainer container kubepods-burstable-pod899a83bf_2f3f_42fc_8f12_c8d235d4f83d.slice. Oct 8 19:52:09.799084 systemd[1]: Created slice kubepods-besteffort-pod779c09a7_b1aa_448c_b504_3cddbdcbc6af.slice - libcontainer container kubepods-besteffort-pod779c09a7_b1aa_448c_b504_3cddbdcbc6af.slice. 
Oct 8 19:52:09.848991 kubelet[2628]: I1008 19:52:09.848926 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/899a83bf-2f3f-42fc-8f12-c8d235d4f83d-config-volume\") pod \"coredns-76f75df574-l746d\" (UID: \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\") " pod="kube-system/coredns-76f75df574-l746d" Oct 8 19:52:09.848991 kubelet[2628]: I1008 19:52:09.848993 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d890c593-8733-4509-ba00-18cbdb137a3b-config-volume\") pod \"coredns-76f75df574-qlplh\" (UID: \"d890c593-8733-4509-ba00-18cbdb137a3b\") " pod="kube-system/coredns-76f75df574-qlplh" Oct 8 19:52:09.848991 kubelet[2628]: I1008 19:52:09.849025 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8b2n\" (UniqueName: \"kubernetes.io/projected/779c09a7-b1aa-448c-b504-3cddbdcbc6af-kube-api-access-w8b2n\") pod \"calico-kube-controllers-78df779756-sx78s\" (UID: \"779c09a7-b1aa-448c-b504-3cddbdcbc6af\") " pod="calico-system/calico-kube-controllers-78df779756-sx78s" Oct 8 19:52:09.849311 kubelet[2628]: I1008 19:52:09.849098 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4tv\" (UniqueName: \"kubernetes.io/projected/899a83bf-2f3f-42fc-8f12-c8d235d4f83d-kube-api-access-7w4tv\") pod \"coredns-76f75df574-l746d\" (UID: \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\") " pod="kube-system/coredns-76f75df574-l746d" Oct 8 19:52:09.849347 kubelet[2628]: I1008 19:52:09.849283 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87cl\" (UniqueName: \"kubernetes.io/projected/d890c593-8733-4509-ba00-18cbdb137a3b-kube-api-access-l87cl\") pod \"coredns-76f75df574-qlplh\" (UID: 
\"d890c593-8733-4509-ba00-18cbdb137a3b\") " pod="kube-system/coredns-76f75df574-qlplh" Oct 8 19:52:09.849515 kubelet[2628]: I1008 19:52:09.849386 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/779c09a7-b1aa-448c-b504-3cddbdcbc6af-tigera-ca-bundle\") pod \"calico-kube-controllers-78df779756-sx78s\" (UID: \"779c09a7-b1aa-448c-b504-3cddbdcbc6af\") " pod="calico-system/calico-kube-controllers-78df779756-sx78s" Oct 8 19:52:10.089786 kubelet[2628]: E1008 19:52:10.089020 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.092005 containerd[1464]: time="2024-10-08T19:52:10.090270957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qlplh,Uid:d890c593-8733-4509-ba00-18cbdb137a3b,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:10.094056 kubelet[2628]: E1008 19:52:10.094029 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.095609 containerd[1464]: time="2024-10-08T19:52:10.095333589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l746d,Uid:899a83bf-2f3f-42fc-8f12-c8d235d4f83d,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:10.105337 containerd[1464]: time="2024-10-08T19:52:10.105267641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df779756-sx78s,Uid:779c09a7-b1aa-448c-b504-3cddbdcbc6af,Namespace:calico-system,Attempt:0,}" Oct 8 19:52:10.301441 systemd[1]: Created slice kubepods-besteffort-podae7adb50_443a_4488_8328_041f1c3fd2cd.slice - libcontainer container kubepods-besteffort-podae7adb50_443a_4488_8328_041f1c3fd2cd.slice. 
Oct 8 19:52:10.304374 containerd[1464]: time="2024-10-08T19:52:10.304314380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88gsg,Uid:ae7adb50-443a-4488-8328-041f1c3fd2cd,Namespace:calico-system,Attempt:0,}" Oct 8 19:52:10.422499 kubelet[2628]: E1008 19:52:10.422455 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.423264 containerd[1464]: time="2024-10-08T19:52:10.423231741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:52:12.197080 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:35468.service - OpenSSH per-connection server daemon (10.0.0.1:35468). Oct 8 19:52:12.256811 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 35468 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:12.258613 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:12.334656 systemd-logind[1442]: New session 14 of user core. Oct 8 19:52:12.349895 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:52:12.362916 kubelet[2628]: I1008 19:52:12.362883 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:52:12.503160 kubelet[2628]: E1008 19:52:12.503031 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:12.523233 sshd[3564]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:12.527130 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:35468.service: Deactivated successfully. Oct 8 19:52:12.529246 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:52:12.529943 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:52:12.530958 systemd-logind[1442]: Removed session 14. 
Oct 8 19:52:13.428410 kubelet[2628]: E1008 19:52:13.428354 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:13.889216 containerd[1464]: time="2024-10-08T19:52:13.889147465Z" level=error msg="Failed to destroy network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:13.889749 containerd[1464]: time="2024-10-08T19:52:13.889599877Z" level=error msg="encountered an error cleaning up failed sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:13.889749 containerd[1464]: time="2024-10-08T19:52:13.889661215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qlplh,Uid:d890c593-8733-4509-ba00-18cbdb137a3b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:13.890068 kubelet[2628]: E1008 19:52:13.890034 2628 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:13.890166 kubelet[2628]: E1008 19:52:13.890112 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qlplh" Oct 8 19:52:13.890166 kubelet[2628]: E1008 19:52:13.890132 2628 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qlplh" Oct 8 19:52:13.890220 kubelet[2628]: E1008 19:52:13.890187 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qlplh_kube-system(d890c593-8733-4509-ba00-18cbdb137a3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qlplh_kube-system(d890c593-8733-4509-ba00-18cbdb137a3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qlplh" podUID="d890c593-8733-4509-ba00-18cbdb137a3b" Oct 8 19:52:13.891809 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f-shm.mount: Deactivated successfully. Oct 8 19:52:14.431475 kubelet[2628]: I1008 19:52:14.431428 2628 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:14.433067 containerd[1464]: time="2024-10-08T19:52:14.433013239Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\"" Oct 8 19:52:14.433281 containerd[1464]: time="2024-10-08T19:52:14.433234816Z" level=info msg="Ensure that sandbox e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f in task-service has been cleanup successfully" Oct 8 19:52:14.464139 containerd[1464]: time="2024-10-08T19:52:14.464078240Z" level=error msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" failed" error="failed to destroy network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.464729 kubelet[2628]: E1008 19:52:14.464475 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:14.464729 kubelet[2628]: E1008 19:52:14.464571 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"} Oct 8 19:52:14.464729 kubelet[2628]: E1008 19:52:14.464604 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d890c593-8733-4509-ba00-18cbdb137a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:14.464729 kubelet[2628]: E1008 19:52:14.464658 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d890c593-8733-4509-ba00-18cbdb137a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qlplh" podUID="d890c593-8733-4509-ba00-18cbdb137a3b" Oct 8 19:52:14.807213 containerd[1464]: time="2024-10-08T19:52:14.807066155Z" level=error msg="Failed to destroy network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.807672 containerd[1464]: time="2024-10-08T19:52:14.807619529Z" level=error msg="encountered an error cleaning up failed sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.807758 containerd[1464]: time="2024-10-08T19:52:14.807728409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l746d,Uid:899a83bf-2f3f-42fc-8f12-c8d235d4f83d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.809682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f-shm.mount: Deactivated successfully. Oct 8 19:52:14.809915 kubelet[2628]: E1008 19:52:14.809839 2628 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.809997 kubelet[2628]: E1008 19:52:14.809940 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l746d" Oct 8 19:52:14.809997 kubelet[2628]: E1008 19:52:14.809962 2628 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l746d" Oct 8 19:52:14.810151 kubelet[2628]: E1008 19:52:14.810028 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l746d_kube-system(899a83bf-2f3f-42fc-8f12-c8d235d4f83d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l746d_kube-system(899a83bf-2f3f-42fc-8f12-c8d235d4f83d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l746d" podUID="899a83bf-2f3f-42fc-8f12-c8d235d4f83d" Oct 8 19:52:14.964356 containerd[1464]: time="2024-10-08T19:52:14.964288562Z" level=error msg="Failed to destroy network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.964897 containerd[1464]: time="2024-10-08T19:52:14.964808543Z" level=error msg="encountered an error cleaning up failed sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.964897 containerd[1464]: 
time="2024-10-08T19:52:14.964871144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df779756-sx78s,Uid:779c09a7-b1aa-448c-b504-3cddbdcbc6af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.965191 kubelet[2628]: E1008 19:52:14.965152 2628 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:14.965263 kubelet[2628]: E1008 19:52:14.965224 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78df779756-sx78s" Oct 8 19:52:14.965263 kubelet[2628]: E1008 19:52:14.965252 2628 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78df779756-sx78s" Oct 8 
19:52:14.965344 kubelet[2628]: E1008 19:52:14.965329 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78df779756-sx78s_calico-system(779c09a7-b1aa-448c-b504-3cddbdcbc6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78df779756-sx78s_calico-system(779c09a7-b1aa-448c-b504-3cddbdcbc6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78df779756-sx78s" podUID="779c09a7-b1aa-448c-b504-3cddbdcbc6af" Oct 8 19:52:14.966919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699-shm.mount: Deactivated successfully. 
Oct 8 19:52:15.063678 containerd[1464]: time="2024-10-08T19:52:15.063526944Z" level=error msg="Failed to destroy network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.064130 containerd[1464]: time="2024-10-08T19:52:15.063965948Z" level=error msg="encountered an error cleaning up failed sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.064130 containerd[1464]: time="2024-10-08T19:52:15.064017838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88gsg,Uid:ae7adb50-443a-4488-8328-041f1c3fd2cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.064753 kubelet[2628]: E1008 19:52:15.064728 2628 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.064832 kubelet[2628]: E1008 19:52:15.064787 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88gsg" Oct 8 19:52:15.064832 kubelet[2628]: E1008 19:52:15.064814 2628 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88gsg" Oct 8 19:52:15.064943 kubelet[2628]: E1008 19:52:15.064893 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88gsg_calico-system(ae7adb50-443a-4488-8328-041f1c3fd2cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88gsg_calico-system(ae7adb50-443a-4488-8328-041f1c3fd2cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:15.066060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b-shm.mount: Deactivated successfully. 
Oct 8 19:52:15.434348 kubelet[2628]: I1008 19:52:15.434303 2628 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:15.436633 containerd[1464]: time="2024-10-08T19:52:15.436073822Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" Oct 8 19:52:15.436633 containerd[1464]: time="2024-10-08T19:52:15.436304886Z" level=info msg="Ensure that sandbox a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f in task-service has been cleanup successfully" Oct 8 19:52:15.437852 kubelet[2628]: I1008 19:52:15.437822 2628 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:15.439368 containerd[1464]: time="2024-10-08T19:52:15.439323882Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:52:15.439536 containerd[1464]: time="2024-10-08T19:52:15.439509059Z" level=info msg="Ensure that sandbox 04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b in task-service has been cleanup successfully" Oct 8 19:52:15.441671 kubelet[2628]: I1008 19:52:15.440748 2628 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:15.442460 containerd[1464]: time="2024-10-08T19:52:15.442396872Z" level=info msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" Oct 8 19:52:15.442683 containerd[1464]: time="2024-10-08T19:52:15.442660959Z" level=info msg="Ensure that sandbox f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699 in task-service has been cleanup successfully" Oct 8 19:52:15.473311 containerd[1464]: time="2024-10-08T19:52:15.473254193Z" level=error msg="StopPodSandbox for 
\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" failed" error="failed to destroy network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.473455 containerd[1464]: time="2024-10-08T19:52:15.473254363Z" level=error msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" failed" error="failed to destroy network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.473605 kubelet[2628]: E1008 19:52:15.473577 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:15.473685 kubelet[2628]: E1008 19:52:15.473666 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699"} Oct 8 19:52:15.473685 kubelet[2628]: E1008 19:52:15.473577 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:15.473782 kubelet[2628]: E1008 19:52:15.473725 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f"} Oct 8 19:52:15.473782 kubelet[2628]: E1008 19:52:15.473762 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:15.473882 kubelet[2628]: E1008 19:52:15.473781 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"779c09a7-b1aa-448c-b504-3cddbdcbc6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:15.473882 kubelet[2628]: E1008 19:52:15.473794 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l746d" podUID="899a83bf-2f3f-42fc-8f12-c8d235d4f83d" Oct 8 19:52:15.473882 kubelet[2628]: E1008 19:52:15.473818 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"779c09a7-b1aa-448c-b504-3cddbdcbc6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78df779756-sx78s" podUID="779c09a7-b1aa-448c-b504-3cddbdcbc6af" Oct 8 19:52:15.474762 containerd[1464]: time="2024-10-08T19:52:15.474730242Z" level=error msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" failed" error="failed to destroy network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:15.474920 kubelet[2628]: E1008 19:52:15.474891 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:15.474920 kubelet[2628]: E1008 19:52:15.474913 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b"} Oct 8 19:52:15.474988 kubelet[2628]: E1008 19:52:15.474939 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae7adb50-443a-4488-8328-041f1c3fd2cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:15.474988 kubelet[2628]: E1008 19:52:15.474963 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae7adb50-443a-4488-8328-041f1c3fd2cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:17.539402 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472). Oct 8 19:52:17.607347 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:17.609421 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:17.614465 systemd-logind[1442]: New session 15 of user core. Oct 8 19:52:17.621849 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 8 19:52:18.840786 sshd[3819]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:18.844791 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:35472.service: Deactivated successfully. Oct 8 19:52:18.847100 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:52:18.848037 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:52:18.849038 systemd-logind[1442]: Removed session 15. Oct 8 19:52:20.954348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501485606.mount: Deactivated successfully. Oct 8 19:52:23.851724 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:35316.service - OpenSSH per-connection server daemon (10.0.0.1:35316). Oct 8 19:52:27.524559 kubelet[2628]: E1008 19:52:27.523848 2628 kubelet.go:2503] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.23s" Oct 8 19:52:27.525472 containerd[1464]: time="2024-10-08T19:52:27.525193598Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\"" Oct 8 19:52:27.580951 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 35316 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:27.585185 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:27.590236 systemd-logind[1442]: New session 16 of user core. Oct 8 19:52:27.599035 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 8 19:52:27.617968 containerd[1464]: time="2024-10-08T19:52:27.617884995Z" level=error msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" failed" error="failed to destroy network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:27.618304 kubelet[2628]: E1008 19:52:27.618259 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:27.618390 kubelet[2628]: E1008 19:52:27.618323 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"} Oct 8 19:52:27.618390 kubelet[2628]: E1008 19:52:27.618375 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d890c593-8733-4509-ba00-18cbdb137a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:27.618483 kubelet[2628]: E1008 19:52:27.618412 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"d890c593-8733-4509-ba00-18cbdb137a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qlplh" podUID="d890c593-8733-4509-ba00-18cbdb137a3b" Oct 8 19:52:28.166295 sshd[3845]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:28.179893 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:35316.service: Deactivated successfully. Oct 8 19:52:28.182021 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:52:28.184268 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:52:28.194997 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:35328.service - OpenSSH per-connection server daemon (10.0.0.1:35328). Oct 8 19:52:28.196099 systemd-logind[1442]: Removed session 16. Oct 8 19:52:28.224682 sshd[3882]: Accepted publickey for core from 10.0.0.1 port 35328 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:28.226439 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:28.230707 systemd-logind[1442]: New session 17 of user core. Oct 8 19:52:28.240866 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:52:28.479894 sshd[3882]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:28.491820 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:35328.service: Deactivated successfully. Oct 8 19:52:28.494314 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:52:28.496331 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:52:28.505339 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:35344.service - OpenSSH per-connection server daemon (10.0.0.1:35344). 
Oct 8 19:52:28.508035 systemd-logind[1442]: Removed session 17. Oct 8 19:52:28.541566 sshd[3894]: Accepted publickey for core from 10.0.0.1 port 35344 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:28.542817 containerd[1464]: time="2024-10-08T19:52:28.542680027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:28.543101 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:28.548100 systemd-logind[1442]: New session 18 of user core. Oct 8 19:52:28.560993 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:52:28.672784 containerd[1464]: time="2024-10-08T19:52:28.671975702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 19:52:28.771985 containerd[1464]: time="2024-10-08T19:52:28.771809355Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:28.842340 containerd[1464]: time="2024-10-08T19:52:28.842274645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:28.842997 containerd[1464]: time="2024-10-08T19:52:28.842967027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 18.419696563s" Oct 8 19:52:28.843053 containerd[1464]: time="2024-10-08T19:52:28.843000150Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 19:52:28.851656 containerd[1464]: time="2024-10-08T19:52:28.851615259Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:52:28.964658 sshd[3894]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:28.970476 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:35344.service: Deactivated successfully. Oct 8 19:52:28.973299 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:52:28.974116 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:52:28.975854 systemd-logind[1442]: Removed session 18. Oct 8 19:52:29.295314 containerd[1464]: time="2024-10-08T19:52:29.295207886Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:52:29.320269 containerd[1464]: time="2024-10-08T19:52:29.320194232Z" level=error msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" failed" error="failed to destroy network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:29.320561 kubelet[2628]: E1008 19:52:29.320515 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:29.320919 kubelet[2628]: E1008 19:52:29.320574 2628 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b"} Oct 8 19:52:29.320919 kubelet[2628]: E1008 19:52:29.320616 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae7adb50-443a-4488-8328-041f1c3fd2cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:29.320919 kubelet[2628]: E1008 19:52:29.320647 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae7adb50-443a-4488-8328-041f1c3fd2cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88gsg" podUID="ae7adb50-443a-4488-8328-041f1c3fd2cd" Oct 8 19:52:29.746598 containerd[1464]: time="2024-10-08T19:52:29.746538215Z" level=info msg="CreateContainer within sandbox \"6fddcde53fdd96f36883ab5d0ca71ac0f74ed3fd9ea95021b6a90d51d32736ad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"30cd39c531c4673825799cd03c199f6fe65675bad39fe2f74de2bd4e2139ba15\"" Oct 8 19:52:29.747340 containerd[1464]: time="2024-10-08T19:52:29.747143891Z" level=info msg="StartContainer for 
\"30cd39c531c4673825799cd03c199f6fe65675bad39fe2f74de2bd4e2139ba15\"" Oct 8 19:52:29.817880 systemd[1]: Started cri-containerd-30cd39c531c4673825799cd03c199f6fe65675bad39fe2f74de2bd4e2139ba15.scope - libcontainer container 30cd39c531c4673825799cd03c199f6fe65675bad39fe2f74de2bd4e2139ba15. Oct 8 19:52:30.020851 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:52:30.021734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:52:30.041585 containerd[1464]: time="2024-10-08T19:52:30.041500648Z" level=info msg="StartContainer for \"30cd39c531c4673825799cd03c199f6fe65675bad39fe2f74de2bd4e2139ba15\" returns successfully" Oct 8 19:52:30.294457 containerd[1464]: time="2024-10-08T19:52:30.294284911Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" Oct 8 19:52:30.324176 containerd[1464]: time="2024-10-08T19:52:30.324097534Z" level=error msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" failed" error="failed to destroy network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:52:30.324338 kubelet[2628]: E1008 19:52:30.324280 2628 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:30.324338 kubelet[2628]: E1008 19:52:30.324320 2628 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f"} Oct 8 19:52:30.324780 kubelet[2628]: E1008 19:52:30.324354 2628 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:52:30.324780 kubelet[2628]: E1008 19:52:30.324384 2628 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"899a83bf-2f3f-42fc-8f12-c8d235d4f83d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l746d" podUID="899a83bf-2f3f-42fc-8f12-c8d235d4f83d" Oct 8 19:52:30.533477 kubelet[2628]: E1008 19:52:30.533434 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:31.295060 containerd[1464]: time="2024-10-08T19:52:31.294984914Z" level=info msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" Oct 8 19:52:31.535561 kubelet[2628]: E1008 19:52:31.535522 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:32.119102 kubelet[2628]: I1008 19:52:32.118618 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-thbcg" podStartSLOduration=5.84433904 podStartE2EDuration="50.11856215s" podCreationTimestamp="2024-10-08 19:51:42 +0000 UTC" firstStartedPulling="2024-10-08 19:51:44.56907008 +0000 UTC m=+26.382888588" lastFinishedPulling="2024-10-08 19:52:28.84329319 +0000 UTC m=+70.657111698" observedRunningTime="2024-10-08 19:52:30.906301634 +0000 UTC m=+72.720120162" watchObservedRunningTime="2024-10-08 19:52:32.11856215 +0000 UTC m=+73.932380659" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.115 [INFO][4049] k8s.go 608: Cleaning up netns ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.116 [INFO][4049] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" iface="eth0" netns="/var/run/netns/cni-cd1c3527-9650-4eea-990d-dd1134446beb" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.116 [INFO][4049] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" iface="eth0" netns="/var/run/netns/cni-cd1c3527-9650-4eea-990d-dd1134446beb" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.117 [INFO][4049] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" iface="eth0" netns="/var/run/netns/cni-cd1c3527-9650-4eea-990d-dd1134446beb" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.117 [INFO][4049] k8s.go 615: Releasing IP address(es) ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.117 [INFO][4049] utils.go 188: Calico CNI releasing IP address ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.494 [INFO][4079] ipam_plugin.go 417: Releasing address using handleID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.494 [INFO][4079] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.494 [INFO][4079] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.501 [WARNING][4079] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.501 [INFO][4079] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.502 [INFO][4079] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:52:32.507951 containerd[1464]: 2024-10-08 19:52:32.505 [INFO][4049] k8s.go 621: Teardown processing complete. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:52:32.510337 systemd[1]: run-netns-cni\x2dcd1c3527\x2d9650\x2d4eea\x2d990d\x2ddd1134446beb.mount: Deactivated successfully. 
Oct 8 19:52:32.510839 containerd[1464]: time="2024-10-08T19:52:32.510788691Z" level=info msg="TearDown network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" successfully" Oct 8 19:52:32.510839 containerd[1464]: time="2024-10-08T19:52:32.510822485Z" level=info msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" returns successfully" Oct 8 19:52:32.511732 containerd[1464]: time="2024-10-08T19:52:32.511707032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df779756-sx78s,Uid:779c09a7-b1aa-448c-b504-3cddbdcbc6af,Namespace:calico-system,Attempt:1,}" Oct 8 19:52:33.404825 kernel: bpftool[4222]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:52:33.655119 systemd-networkd[1399]: vxlan.calico: Link UP Oct 8 19:52:33.655130 systemd-networkd[1399]: vxlan.calico: Gained carrier Oct 8 19:52:33.983338 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:50272.service - OpenSSH per-connection server daemon (10.0.0.1:50272). Oct 8 19:52:34.024611 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:34.027145 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:34.032612 systemd-logind[1442]: New session 19 of user core. Oct 8 19:52:34.039868 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:52:34.169418 systemd-networkd[1399]: cali90791fad686: Link UP Oct 8 19:52:34.170185 sshd[4315]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:34.170198 systemd-networkd[1399]: cali90791fad686: Gained carrier Oct 8 19:52:34.176134 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:52:34.177260 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:50272.service: Deactivated successfully. Oct 8 19:52:34.180611 systemd[1]: session-19.scope: Deactivated successfully. 
Oct 8 19:52:34.182479 systemd-logind[1442]: Removed session 19. Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:33.887 [INFO][4265] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0 calico-kube-controllers-78df779756- calico-system 779c09a7-b1aa-448c-b504-3cddbdcbc6af 928 0 2024-10-08 19:51:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78df779756 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78df779756-sx78s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali90791fad686 [] []}} ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:33.887 [INFO][4265] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.001 [INFO][4308] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" HandleID="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.011 [INFO][4308] ipam_plugin.go 270: Auto assigning IP 
ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" HandleID="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78df779756-sx78s", "timestamp":"2024-10-08 19:52:34.00178191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.011 [INFO][4308] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.011 [INFO][4308] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.012 [INFO][4308] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.022 [INFO][4308] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.028 [INFO][4308] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.053 [INFO][4308] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.055 [INFO][4308] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.057 [INFO][4308] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 
8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.057 [INFO][4308] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.059 [INFO][4308] ipam.go 1685: Creating new handle: k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.077 [INFO][4308] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.159 [INFO][4308] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.159 [INFO][4308] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" host="localhost" Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.159 [INFO][4308] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:52:34.255715 containerd[1464]: 2024-10-08 19:52:34.159 [INFO][4308] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" HandleID="k8s-pod-network.e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.165 [INFO][4265] k8s.go 386: Populated endpoint ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0", GenerateName:"calico-kube-controllers-78df779756-", Namespace:"calico-system", SelfLink:"", UID:"779c09a7-b1aa-448c-b504-3cddbdcbc6af", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df779756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78df779756-sx78s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali90791fad686", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.165 [INFO][4265] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.165 [INFO][4265] dataplane_linux.go 68: Setting the host side veth name to cali90791fad686 ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.169 [INFO][4265] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.170 [INFO][4265] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0", 
GenerateName:"calico-kube-controllers-78df779756-", Namespace:"calico-system", SelfLink:"", UID:"779c09a7-b1aa-448c-b504-3cddbdcbc6af", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df779756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b", Pod:"calico-kube-controllers-78df779756-sx78s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali90791fad686", MAC:"3e:31:ec:c3:df:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:34.256714 containerd[1464]: 2024-10-08 19:52:34.248 [INFO][4265] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b" Namespace="calico-system" Pod="calico-kube-controllers-78df779756-sx78s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:52:34.339405 containerd[1464]: time="2024-10-08T19:52:34.339186242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:34.339405 containerd[1464]: time="2024-10-08T19:52:34.339272666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:34.339405 containerd[1464]: time="2024-10-08T19:52:34.339285562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:34.339405 containerd[1464]: time="2024-10-08T19:52:34.339391834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:34.378979 systemd[1]: Started cri-containerd-e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b.scope - libcontainer container e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b. Oct 8 19:52:34.393625 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:52:34.420231 containerd[1464]: time="2024-10-08T19:52:34.420182199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df779756-sx78s,Uid:779c09a7-b1aa-448c-b504-3cddbdcbc6af,Namespace:calico-system,Attempt:1,} returns sandbox id \"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b\"" Oct 8 19:52:34.422714 containerd[1464]: time="2024-10-08T19:52:34.422620183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:52:34.981966 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Oct 8 19:52:36.005959 systemd-networkd[1399]: cali90791fad686: Gained IPv6LL Oct 8 19:52:36.294676 kubelet[2628]: E1008 19:52:36.294479 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.591524 containerd[1464]: 
time="2024-10-08T19:52:36.591455533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:36.594171 containerd[1464]: time="2024-10-08T19:52:36.593845743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 19:52:36.599665 containerd[1464]: time="2024-10-08T19:52:36.599600315Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:36.604203 containerd[1464]: time="2024-10-08T19:52:36.604086734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:36.605261 containerd[1464]: time="2024-10-08T19:52:36.605120391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.18246435s" Oct 8 19:52:36.605261 containerd[1464]: time="2024-10-08T19:52:36.605182079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 19:52:36.617779 containerd[1464]: time="2024-10-08T19:52:36.617651442Z" level=info msg="CreateContainer within sandbox \"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:52:36.636056 containerd[1464]: 
time="2024-10-08T19:52:36.635970898Z" level=info msg="CreateContainer within sandbox \"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cda9bc781eed61477c545266a45d491c51c505811f5ff8881b155d46f35434b1\"" Oct 8 19:52:36.636718 containerd[1464]: time="2024-10-08T19:52:36.636671381Z" level=info msg="StartContainer for \"cda9bc781eed61477c545266a45d491c51c505811f5ff8881b155d46f35434b1\"" Oct 8 19:52:36.673035 systemd[1]: Started cri-containerd-cda9bc781eed61477c545266a45d491c51c505811f5ff8881b155d46f35434b1.scope - libcontainer container cda9bc781eed61477c545266a45d491c51c505811f5ff8881b155d46f35434b1. Oct 8 19:52:37.275568 containerd[1464]: time="2024-10-08T19:52:37.275171325Z" level=info msg="StartContainer for \"cda9bc781eed61477c545266a45d491c51c505811f5ff8881b155d46f35434b1\" returns successfully" Oct 8 19:52:37.590599 kubelet[2628]: I1008 19:52:37.590161 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78df779756-sx78s" podStartSLOduration=52.406291522 podStartE2EDuration="54.590107816s" podCreationTimestamp="2024-10-08 19:51:43 +0000 UTC" firstStartedPulling="2024-10-08 19:52:34.421788979 +0000 UTC m=+76.235607488" lastFinishedPulling="2024-10-08 19:52:36.605605254 +0000 UTC m=+78.419423782" observedRunningTime="2024-10-08 19:52:37.588989057 +0000 UTC m=+79.402807565" watchObservedRunningTime="2024-10-08 19:52:37.590107816 +0000 UTC m=+79.403926324" Oct 8 19:52:38.295314 containerd[1464]: time="2024-10-08T19:52:38.295236644Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\"" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.346 [INFO][4476] k8s.go 608: Cleaning up netns ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.346 [INFO][4476] 
dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" iface="eth0" netns="/var/run/netns/cni-e57fd5d8-feae-4edc-2970-2833fffa1a54" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.346 [INFO][4476] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" iface="eth0" netns="/var/run/netns/cni-e57fd5d8-feae-4edc-2970-2833fffa1a54" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.346 [INFO][4476] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" iface="eth0" netns="/var/run/netns/cni-e57fd5d8-feae-4edc-2970-2833fffa1a54" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.347 [INFO][4476] k8s.go 615: Releasing IP address(es) ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.347 [INFO][4476] utils.go 188: Calico CNI releasing IP address ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.372 [INFO][4483] ipam_plugin.go 417: Releasing address using handleID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.372 [INFO][4483] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.372 [INFO][4483] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.377 [WARNING][4483] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.377 [INFO][4483] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.378 [INFO][4483] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:52:38.384078 containerd[1464]: 2024-10-08 19:52:38.381 [INFO][4476] k8s.go 621: Teardown processing complete. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Oct 8 19:52:38.384722 containerd[1464]: time="2024-10-08T19:52:38.384242962Z" level=info msg="TearDown network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" successfully" Oct 8 19:52:38.384722 containerd[1464]: time="2024-10-08T19:52:38.384270925Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" returns successfully" Oct 8 19:52:38.385814 kubelet[2628]: E1008 19:52:38.384650 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:38.385888 containerd[1464]: time="2024-10-08T19:52:38.385295694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qlplh,Uid:d890c593-8733-4509-ba00-18cbdb137a3b,Namespace:kube-system,Attempt:1,}" Oct 8 19:52:38.388286 systemd[1]: run-netns-cni\x2de57fd5d8\x2dfeae\x2d4edc\x2d2970\x2d2833fffa1a54.mount: Deactivated successfully. 
Oct 8 19:52:38.597248 systemd-networkd[1399]: calif9cd4f2120b: Link UP Oct 8 19:52:38.597643 systemd-networkd[1399]: calif9cd4f2120b: Gained carrier Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.437 [INFO][4496] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--qlplh-eth0 coredns-76f75df574- kube-system d890c593-8733-4509-ba00-18cbdb137a3b 978 0 2024-10-08 19:51:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-qlplh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9cd4f2120b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.437 [INFO][4496] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.467 [INFO][4504] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" HandleID="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.476 [INFO][4504] ipam_plugin.go 270: Auto assigning IP ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" 
HandleID="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00057a720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-qlplh", "timestamp":"2024-10-08 19:52:38.467039108 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.476 [INFO][4504] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.476 [INFO][4504] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.476 [INFO][4504] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.478 [INFO][4504] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.482 [INFO][4504] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.486 [INFO][4504] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.488 [INFO][4504] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.490 [INFO][4504] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.491 [INFO][4504] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.492 [INFO][4504] ipam.go 1685: Creating new handle: k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.499 [INFO][4504] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.591 [INFO][4504] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.591 [INFO][4504] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" host="localhost" Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.591 [INFO][4504] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:52:38.657809 containerd[1464]: 2024-10-08 19:52:38.591 [INFO][4504] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" HandleID="k8s-pod-network.23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.594 [INFO][4496] k8s.go 386: Populated endpoint ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qlplh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d890c593-8733-4509-ba00-18cbdb137a3b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-qlplh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9cd4f2120b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.595 [INFO][4496] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.595 [INFO][4496] dataplane_linux.go 68: Setting the host side veth name to calif9cd4f2120b ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.597 [INFO][4496] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.598 [INFO][4496] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qlplh-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d890c593-8733-4509-ba00-18cbdb137a3b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b", Pod:"coredns-76f75df574-qlplh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9cd4f2120b", MAC:"6a:1c:90:fe:aa:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:38.661923 containerd[1464]: 2024-10-08 19:52:38.653 [INFO][4496] k8s.go 500: Wrote updated endpoint to datastore ContainerID="23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b" Namespace="kube-system" Pod="coredns-76f75df574-qlplh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qlplh-eth0" Oct 8 19:52:38.706724 containerd[1464]: 
time="2024-10-08T19:52:38.706314043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:38.706724 containerd[1464]: time="2024-10-08T19:52:38.706491150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:38.706724 containerd[1464]: time="2024-10-08T19:52:38.706519063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:38.706724 containerd[1464]: time="2024-10-08T19:52:38.706610036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:38.731957 systemd[1]: Started cri-containerd-23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b.scope - libcontainer container 23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b. 
Oct 8 19:52:38.748402 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:52:38.775994 containerd[1464]: time="2024-10-08T19:52:38.775915523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qlplh,Uid:d890c593-8733-4509-ba00-18cbdb137a3b,Namespace:kube-system,Attempt:1,} returns sandbox id \"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b\"" Oct 8 19:52:38.776897 kubelet[2628]: E1008 19:52:38.776873 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:38.779540 containerd[1464]: time="2024-10-08T19:52:38.779499690Z" level=info msg="CreateContainer within sandbox \"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:52:38.823948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792510352.mount: Deactivated successfully. Oct 8 19:52:38.828306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233547039.mount: Deactivated successfully. Oct 8 19:52:38.829988 containerd[1464]: time="2024-10-08T19:52:38.829934817Z" level=info msg="CreateContainer within sandbox \"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f095cc3d42af2bf1c0e9ee09e5b470a85e26002fffe3ef27ede97a0a09cd5a0\"" Oct 8 19:52:38.830663 containerd[1464]: time="2024-10-08T19:52:38.830635439Z" level=info msg="StartContainer for \"7f095cc3d42af2bf1c0e9ee09e5b470a85e26002fffe3ef27ede97a0a09cd5a0\"" Oct 8 19:52:38.862876 systemd[1]: Started cri-containerd-7f095cc3d42af2bf1c0e9ee09e5b470a85e26002fffe3ef27ede97a0a09cd5a0.scope - libcontainer container 7f095cc3d42af2bf1c0e9ee09e5b470a85e26002fffe3ef27ede97a0a09cd5a0. 
Oct 8 19:52:38.898203 containerd[1464]: time="2024-10-08T19:52:38.898111427Z" level=info msg="StartContainer for \"7f095cc3d42af2bf1c0e9ee09e5b470a85e26002fffe3ef27ede97a0a09cd5a0\" returns successfully" Oct 8 19:52:39.185317 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:50284.service - OpenSSH per-connection server daemon (10.0.0.1:50284). Oct 8 19:52:39.230302 sshd[4613]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:39.232384 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:39.237233 systemd-logind[1442]: New session 20 of user core. Oct 8 19:52:39.245854 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:52:39.371421 sshd[4613]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:39.375988 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:50284.service: Deactivated successfully. Oct 8 19:52:39.378373 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:52:39.378975 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:52:39.380094 systemd-logind[1442]: Removed session 20. 
Oct 8 19:52:39.578843 kubelet[2628]: E1008 19:52:39.577773 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:39.587633 kubelet[2628]: I1008 19:52:39.587578 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qlplh" podStartSLOduration=66.587529143 podStartE2EDuration="1m6.587529143s" podCreationTimestamp="2024-10-08 19:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:39.586456804 +0000 UTC m=+81.400275312" watchObservedRunningTime="2024-10-08 19:52:39.587529143 +0000 UTC m=+81.401347651" Oct 8 19:52:40.357931 systemd-networkd[1399]: calif9cd4f2120b: Gained IPv6LL Oct 8 19:52:40.580873 kubelet[2628]: E1008 19:52:40.580831 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.582464 kubelet[2628]: E1008 19:52:41.582416 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:43.294953 kubelet[2628]: E1008 19:52:43.294887 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:44.295099 containerd[1464]: time="2024-10-08T19:52:44.295039856Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:52:44.389902 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:41506.service - OpenSSH per-connection server daemon (10.0.0.1:41506). 
Oct 8 19:52:44.432118 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 41506 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:52:44.434355 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:52:44.439334 systemd-logind[1442]: New session 21 of user core. Oct 8 19:52:44.444242 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.447 [INFO][4695] k8s.go 608: Cleaning up netns ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.447 [INFO][4695] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" iface="eth0" netns="/var/run/netns/cni-6e32f17c-de03-d6e1-76c4-f02342087c49" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.447 [INFO][4695] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" iface="eth0" netns="/var/run/netns/cni-6e32f17c-de03-d6e1-76c4-f02342087c49" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.448 [INFO][4695] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" iface="eth0" netns="/var/run/netns/cni-6e32f17c-de03-d6e1-76c4-f02342087c49" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.448 [INFO][4695] k8s.go 615: Releasing IP address(es) ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.448 [INFO][4695] utils.go 188: Calico CNI releasing IP address ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.478 [INFO][4709] ipam_plugin.go 417: Releasing address using handleID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.478 [INFO][4709] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.478 [INFO][4709] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.485 [WARNING][4709] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.485 [INFO][4709] ipam_plugin.go 445: Releasing address using workloadID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.487 [INFO][4709] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:52:44.492602 containerd[1464]: 2024-10-08 19:52:44.489 [INFO][4695] k8s.go 621: Teardown processing complete. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:52:44.493211 containerd[1464]: time="2024-10-08T19:52:44.492808123Z" level=info msg="TearDown network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" successfully" Oct 8 19:52:44.493211 containerd[1464]: time="2024-10-08T19:52:44.492891111Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" returns successfully" Oct 8 19:52:44.493935 containerd[1464]: time="2024-10-08T19:52:44.493889266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88gsg,Uid:ae7adb50-443a-4488-8328-041f1c3fd2cd,Namespace:calico-system,Attempt:1,}" Oct 8 19:52:44.497630 systemd[1]: run-netns-cni\x2d6e32f17c\x2dde03\x2dd6e1\x2d76c4\x2df02342087c49.mount: Deactivated successfully. Oct 8 19:52:44.603261 sshd[4703]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:44.607423 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:52:44.609194 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:41506.service: Deactivated successfully. 
Oct 8 19:52:44.614006 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:52:44.615672 systemd-logind[1442]: Removed session 21. Oct 8 19:52:44.709404 systemd-networkd[1399]: calic3268b13b19: Link UP Oct 8 19:52:44.709630 systemd-networkd[1399]: calic3268b13b19: Gained carrier Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.564 [INFO][4727] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--88gsg-eth0 csi-node-driver- calico-system ae7adb50-443a-4488-8328-041f1c3fd2cd 1024 0 2024-10-08 19:51:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-88gsg eth0 default [] [] [kns.calico-system ksa.calico-system.default] calic3268b13b19 [] []}} ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.564 [INFO][4727] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.607 [INFO][4743] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" HandleID="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.617 [INFO][4743] ipam_plugin.go 270: 
Auto assigning IP ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" HandleID="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-88gsg", "timestamp":"2024-10-08 19:52:44.607610604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.617 [INFO][4743] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.618 [INFO][4743] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.618 [INFO][4743] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.619 [INFO][4743] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.622 [INFO][4743] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.632 [INFO][4743] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.634 [INFO][4743] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.637 [INFO][4743] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:52:44.733736 
containerd[1464]: 2024-10-08 19:52:44.637 [INFO][4743] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.642 [INFO][4743] ipam.go 1685: Creating new handle: k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60 Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.682 [INFO][4743] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.701 [INFO][4743] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.702 [INFO][4743] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" host="localhost" Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.702 [INFO][4743] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:52:44.733736 containerd[1464]: 2024-10-08 19:52:44.702 [INFO][4743] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" HandleID="k8s-pod-network.5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.705 [INFO][4727] k8s.go 386: Populated endpoint ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88gsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae7adb50-443a-4488-8328-041f1c3fd2cd", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-88gsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calic3268b13b19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.706 [INFO][4727] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.706 [INFO][4727] dataplane_linux.go 68: Setting the host side veth name to calic3268b13b19 ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.710 [INFO][4727] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.710 [INFO][4727] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88gsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae7adb50-443a-4488-8328-041f1c3fd2cd", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60", Pod:"csi-node-driver-88gsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic3268b13b19", MAC:"72:32:5e:90:9b:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:52:44.734361 containerd[1464]: 2024-10-08 19:52:44.725 [INFO][4727] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60" Namespace="calico-system" Pod="csi-node-driver-88gsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:52:44.772270 containerd[1464]: time="2024-10-08T19:52:44.772101271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:44.772270 containerd[1464]: time="2024-10-08T19:52:44.772182576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:44.772270 containerd[1464]: time="2024-10-08T19:52:44.772214467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:44.772522 containerd[1464]: time="2024-10-08T19:52:44.772348140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:44.806355 systemd[1]: Started cri-containerd-5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60.scope - libcontainer container 5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60. Oct 8 19:52:44.827531 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:52:44.847749 containerd[1464]: time="2024-10-08T19:52:44.847439476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88gsg,Uid:ae7adb50-443a-4488-8328-041f1c3fd2cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60\"" Oct 8 19:52:44.852230 containerd[1464]: time="2024-10-08T19:52:44.852175618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:52:45.294918 kubelet[2628]: E1008 19:52:45.294869 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:45.295626 containerd[1464]: time="2024-10-08T19:52:45.295069956Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.727 [INFO][4826] k8s.go 608: Cleaning up netns ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.728 [INFO][4826] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" iface="eth0" netns="/var/run/netns/cni-1353d8da-a0fb-2313-c3ce-714996d87206" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.729 [INFO][4826] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" iface="eth0" netns="/var/run/netns/cni-1353d8da-a0fb-2313-c3ce-714996d87206" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.729 [INFO][4826] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" iface="eth0" netns="/var/run/netns/cni-1353d8da-a0fb-2313-c3ce-714996d87206" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.729 [INFO][4826] k8s.go 615: Releasing IP address(es) ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.729 [INFO][4826] utils.go 188: Calico CNI releasing IP address ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.753 [INFO][4834] ipam_plugin.go 417: Releasing address using handleID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.754 [INFO][4834] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.754 [INFO][4834] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.759 [WARNING][4834] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.759 [INFO][4834] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.760 [INFO][4834] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:52:45.767878 containerd[1464]: 2024-10-08 19:52:45.763 [INFO][4826] k8s.go 621: Teardown processing complete. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:52:45.769194 containerd[1464]: time="2024-10-08T19:52:45.768962083Z" level=info msg="TearDown network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" successfully" Oct 8 19:52:45.769194 containerd[1464]: time="2024-10-08T19:52:45.768996719Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" returns successfully" Oct 8 19:52:45.769740 kubelet[2628]: E1008 19:52:45.769564 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:45.770521 containerd[1464]: time="2024-10-08T19:52:45.770199372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l746d,Uid:899a83bf-2f3f-42fc-8f12-c8d235d4f83d,Namespace:kube-system,Attempt:1,}" Oct 8 19:52:45.774420 systemd[1]: run-netns-cni\x2d1353d8da\x2da0fb\x2d2313\x2dc3ce\x2d714996d87206.mount: Deactivated successfully. 
Oct 8 19:52:45.798101 systemd-networkd[1399]: calic3268b13b19: Gained IPv6LL
Oct 8 19:52:46.480375 systemd-networkd[1399]: calib06a6fe01cf: Link UP
Oct 8 19:52:46.481062 systemd-networkd[1399]: calib06a6fe01cf: Gained carrier
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.213 [INFO][4843] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--l746d-eth0 coredns-76f75df574- kube-system 899a83bf-2f3f-42fc-8f12-c8d235d4f83d 1035 0 2024-10-08 19:51:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-l746d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib06a6fe01cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.213 [INFO][4843] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.250 [INFO][4856] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" HandleID="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.259 [INFO][4856] ipam_plugin.go 270: Auto assigning IP ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" HandleID="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddc10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-l746d", "timestamp":"2024-10-08 19:52:46.250930448 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.260 [INFO][4856] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.260 [INFO][4856] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.260 [INFO][4856] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.261 [INFO][4856] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.265 [INFO][4856] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.273 [INFO][4856] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.277 [INFO][4856] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.282 [INFO][4856] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.282 [INFO][4856] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.284 [INFO][4856] ipam.go 1685: Creating new handle: k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.313 [INFO][4856] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.473 [INFO][4856] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.473 [INFO][4856] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" host="localhost"
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.474 [INFO][4856] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:52:46.905719 containerd[1464]: 2024-10-08 19:52:46.474 [INFO][4856] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" HandleID="k8s-pod-network.aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.477 [INFO][4843] k8s.go 386: Populated endpoint ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l746d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"899a83bf-2f3f-42fc-8f12-c8d235d4f83d", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-l746d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib06a6fe01cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.477 [INFO][4843] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.477 [INFO][4843] dataplane_linux.go 68: Setting the host side veth name to calib06a6fe01cf ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.480 [INFO][4843] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.481 [INFO][4843] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l746d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"899a83bf-2f3f-42fc-8f12-c8d235d4f83d", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510", Pod:"coredns-76f75df574-l746d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib06a6fe01cf", MAC:"42:00:32:f8:58:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:52:46.907151 containerd[1464]: 2024-10-08 19:52:46.901 [INFO][4843] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510" Namespace="kube-system" Pod="coredns-76f75df574-l746d" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:52:47.268433 containerd[1464]: time="2024-10-08T19:52:47.267860951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:52:47.268561 containerd[1464]: time="2024-10-08T19:52:47.268436412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:52:47.268561 containerd[1464]: time="2024-10-08T19:52:47.268458154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:47.268813 containerd[1464]: time="2024-10-08T19:52:47.268593620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:47.296026 systemd[1]: Started cri-containerd-aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510.scope - libcontainer container aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510.
Oct 8 19:52:47.312671 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:52:47.341189 containerd[1464]: time="2024-10-08T19:52:47.341128967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l746d,Uid:899a83bf-2f3f-42fc-8f12-c8d235d4f83d,Namespace:kube-system,Attempt:1,} returns sandbox id \"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510\""
Oct 8 19:52:47.342195 kubelet[2628]: E1008 19:52:47.342174 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:47.344370 containerd[1464]: time="2024-10-08T19:52:47.344337644Z" level=info msg="CreateContainer within sandbox \"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:52:48.165976 systemd-networkd[1399]: calib06a6fe01cf: Gained IPv6LL
Oct 8 19:52:48.953566 containerd[1464]: time="2024-10-08T19:52:48.953463154Z" level=info msg="CreateContainer within sandbox \"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be35d3f46220d8c0faa5024f13185e8076aaf0538f1513f15df794d5c71e784b\""
Oct 8 19:52:48.954555 containerd[1464]: time="2024-10-08T19:52:48.954486385Z" level=info msg="StartContainer for \"be35d3f46220d8c0faa5024f13185e8076aaf0538f1513f15df794d5c71e784b\""
Oct 8 19:52:48.989903 systemd[1]: Started cri-containerd-be35d3f46220d8c0faa5024f13185e8076aaf0538f1513f15df794d5c71e784b.scope - libcontainer container be35d3f46220d8c0faa5024f13185e8076aaf0538f1513f15df794d5c71e784b.
Oct 8 19:52:49.123092 containerd[1464]: time="2024-10-08T19:52:49.123024256Z" level=info msg="StartContainer for \"be35d3f46220d8c0faa5024f13185e8076aaf0538f1513f15df794d5c71e784b\" returns successfully"
Oct 8 19:52:49.259954 containerd[1464]: time="2024-10-08T19:52:49.259781042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.261148 containerd[1464]: time="2024-10-08T19:52:49.261084133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 8 19:52:49.262768 containerd[1464]: time="2024-10-08T19:52:49.262689247Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.266022 containerd[1464]: time="2024-10-08T19:52:49.265952375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.266769 containerd[1464]: time="2024-10-08T19:52:49.266724859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 4.414492194s"
Oct 8 19:52:49.266769 containerd[1464]: time="2024-10-08T19:52:49.266769103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 8 19:52:49.278219 containerd[1464]: time="2024-10-08T19:52:49.278156045Z" level=info msg="CreateContainer within sandbox \"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 8 19:52:49.387542 containerd[1464]: time="2024-10-08T19:52:49.387436672Z" level=info msg="CreateContainer within sandbox \"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b2bae3a7f14f88f1c90965cda1feabb687d6562c4bf910b037f1d6c80c08838\""
Oct 8 19:52:49.389978 containerd[1464]: time="2024-10-08T19:52:49.388918823Z" level=info msg="StartContainer for \"9b2bae3a7f14f88f1c90965cda1feabb687d6562c4bf910b037f1d6c80c08838\""
Oct 8 19:52:49.435176 systemd[1]: Started cri-containerd-9b2bae3a7f14f88f1c90965cda1feabb687d6562c4bf910b037f1d6c80c08838.scope - libcontainer container 9b2bae3a7f14f88f1c90965cda1feabb687d6562c4bf910b037f1d6c80c08838.
Oct 8 19:52:49.539668 containerd[1464]: time="2024-10-08T19:52:49.539414150Z" level=info msg="StartContainer for \"9b2bae3a7f14f88f1c90965cda1feabb687d6562c4bf910b037f1d6c80c08838\" returns successfully"
Oct 8 19:52:49.542970 containerd[1464]: time="2024-10-08T19:52:49.542907454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 8 19:52:49.612028 kubelet[2628]: E1008 19:52:49.611880 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:49.625285 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:41522.service - OpenSSH per-connection server daemon (10.0.0.1:41522).
Oct 8 19:52:49.626708 kubelet[2628]: I1008 19:52:49.626636 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l746d" podStartSLOduration=76.626584693 podStartE2EDuration="1m16.626584693s" podCreationTimestamp="2024-10-08 19:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:49.624349294 +0000 UTC m=+91.438167802" watchObservedRunningTime="2024-10-08 19:52:49.626584693 +0000 UTC m=+91.440403201"
Oct 8 19:52:49.675062 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 41522 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:52:49.677596 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:49.682805 systemd-logind[1442]: New session 22 of user core.
Oct 8 19:52:49.692416 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:52:49.848282 sshd[4996]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:49.855879 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:41522.service: Deactivated successfully.
Oct 8 19:52:49.858965 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:52:49.860142 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:52:49.862070 systemd-logind[1442]: Removed session 22.
Oct 8 19:52:50.613784 kubelet[2628]: E1008 19:52:50.613739 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:51.616375 kubelet[2628]: E1008 19:52:51.616331 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:51.956315 containerd[1464]: time="2024-10-08T19:52:51.956124678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.961806 containerd[1464]: time="2024-10-08T19:52:51.961656954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 8 19:52:51.963996 containerd[1464]: time="2024-10-08T19:52:51.963928459Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.967375 containerd[1464]: time="2024-10-08T19:52:51.967283748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.968585 containerd[1464]: time="2024-10-08T19:52:51.967963306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.425004385s"
Oct 8 19:52:51.968585 containerd[1464]: time="2024-10-08T19:52:51.968020224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 8 19:52:51.970451 containerd[1464]: time="2024-10-08T19:52:51.970389705Z" level=info msg="CreateContainer within sandbox \"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 8 19:52:51.993102 containerd[1464]: time="2024-10-08T19:52:51.992994368Z" level=info msg="CreateContainer within sandbox \"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ce061e5c78678e970b8db0762091bbcd41ad8c8fd78253af56d535bde4fc5b4\""
Oct 8 19:52:51.993815 containerd[1464]: time="2024-10-08T19:52:51.993774397Z" level=info msg="StartContainer for \"7ce061e5c78678e970b8db0762091bbcd41ad8c8fd78253af56d535bde4fc5b4\""
Oct 8 19:52:52.038011 systemd[1]: Started cri-containerd-7ce061e5c78678e970b8db0762091bbcd41ad8c8fd78253af56d535bde4fc5b4.scope - libcontainer container 7ce061e5c78678e970b8db0762091bbcd41ad8c8fd78253af56d535bde4fc5b4.
Oct 8 19:52:52.156242 containerd[1464]: time="2024-10-08T19:52:52.156058307Z" level=info msg="StartContainer for \"7ce061e5c78678e970b8db0762091bbcd41ad8c8fd78253af56d535bde4fc5b4\" returns successfully"
Oct 8 19:52:52.426036 kubelet[2628]: I1008 19:52:52.425881 2628 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 8 19:52:52.433908 kubelet[2628]: I1008 19:52:52.433852 2628 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 8 19:52:52.816390 kubelet[2628]: I1008 19:52:52.815679 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-88gsg" podStartSLOduration=62.698968256 podStartE2EDuration="1m9.81563962s" podCreationTimestamp="2024-10-08 19:51:43 +0000 UTC" firstStartedPulling="2024-10-08 19:52:44.851642296 +0000 UTC m=+86.665460804" lastFinishedPulling="2024-10-08 19:52:51.96831366 +0000 UTC m=+93.782132168" observedRunningTime="2024-10-08 19:52:52.815207099 +0000 UTC m=+94.629025608" watchObservedRunningTime="2024-10-08 19:52:52.81563962 +0000 UTC m=+94.629458128"
Oct 8 19:52:54.862415 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608).
Oct 8 19:52:54.919613 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:52:54.922165 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:54.928981 systemd-logind[1442]: New session 23 of user core.
Oct 8 19:52:54.943034 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:52:55.073459 sshd[5078]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:55.079140 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:41608.service: Deactivated successfully.
Oct 8 19:52:55.081639 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:52:55.082508 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:52:55.083729 systemd-logind[1442]: Removed session 23.
Oct 8 19:52:55.294600 kubelet[2628]: E1008 19:52:55.294447 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:53:00.088763 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:41616.service - OpenSSH per-connection server daemon (10.0.0.1:41616).
Oct 8 19:53:00.126333 sshd[5096]: Accepted publickey for core from 10.0.0.1 port 41616 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:00.128171 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:00.132655 systemd-logind[1442]: New session 24 of user core.
Oct 8 19:53:00.140930 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:53:00.284639 sshd[5096]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:00.295999 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:41616.service: Deactivated successfully.
Oct 8 19:53:00.299412 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:53:00.302854 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:53:00.312108 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:41632.service - OpenSSH per-connection server daemon (10.0.0.1:41632).
Oct 8 19:53:00.314000 systemd-logind[1442]: Removed session 24.
Oct 8 19:53:00.347546 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 41632 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:00.349669 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:00.354904 systemd-logind[1442]: New session 25 of user core.
Oct 8 19:53:00.361923 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:53:00.910353 sshd[5110]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:00.922949 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:41632.service: Deactivated successfully.
Oct 8 19:53:00.925152 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:53:00.927112 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:53:00.932995 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:48662.service - OpenSSH per-connection server daemon (10.0.0.1:48662).
Oct 8 19:53:00.934061 systemd-logind[1442]: Removed session 25.
Oct 8 19:53:00.990546 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 48662 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:00.992575 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:00.998006 systemd-logind[1442]: New session 26 of user core.
Oct 8 19:53:01.016046 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:53:05.319285 sshd[5124]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:05.329265 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:48662.service: Deactivated successfully.
Oct 8 19:53:05.331847 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:53:05.332760 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:53:05.342189 systemd[1]: Started sshd@26-10.0.0.19:22-10.0.0.1:48674.service - OpenSSH per-connection server daemon (10.0.0.1:48674).
Oct 8 19:53:05.344242 systemd-logind[1442]: Removed session 26.
Oct 8 19:53:05.374592 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 48674 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:05.376878 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:05.382491 systemd-logind[1442]: New session 27 of user core.
Oct 8 19:53:05.393069 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:53:05.875498 sshd[5170]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:05.892948 systemd[1]: sshd@26-10.0.0.19:22-10.0.0.1:48674.service: Deactivated successfully.
Oct 8 19:53:05.894950 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:53:05.896407 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:53:05.904275 systemd[1]: Started sshd@27-10.0.0.19:22-10.0.0.1:48686.service - OpenSSH per-connection server daemon (10.0.0.1:48686).
Oct 8 19:53:05.905382 systemd-logind[1442]: Removed session 27.
Oct 8 19:53:05.933288 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 48686 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:05.935028 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:05.940030 systemd-logind[1442]: New session 28 of user core.
Oct 8 19:53:05.947879 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 8 19:53:06.106568 sshd[5183]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:06.111641 systemd[1]: sshd@27-10.0.0.19:22-10.0.0.1:48686.service: Deactivated successfully.
Oct 8 19:53:06.114785 systemd[1]: session-28.scope: Deactivated successfully.
Oct 8 19:53:06.115636 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit.
Oct 8 19:53:06.116831 systemd-logind[1442]: Removed session 28.
Oct 8 19:53:10.733517 kubelet[2628]: E1008 19:53:10.733469 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:53:10.988225 update_engine[1444]: I20241008 19:53:10.987999 1444 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Oct 8 19:53:10.988225 update_engine[1444]: I20241008 19:53:10.988080 1444 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Oct 8 19:53:10.988824 update_engine[1444]: I20241008 19:53:10.988663 1444 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Oct 8 19:53:10.989333 update_engine[1444]: I20241008 19:53:10.989297 1444 omaha_request_params.cc:62] Current group set to beta
Oct 8 19:53:10.989531 update_engine[1444]: I20241008 19:53:10.989470 1444 update_attempter.cc:499] Already updated boot flags. Skipping.
Oct 8 19:53:10.989531 update_engine[1444]: I20241008 19:53:10.989492 1444 update_attempter.cc:643] Scheduling an action processor start.
Oct 8 19:53:10.989531 update_engine[1444]: I20241008 19:53:10.989521 1444 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 8 19:53:10.989718 update_engine[1444]: I20241008 19:53:10.989596 1444 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Oct 8 19:53:10.989718 update_engine[1444]: I20241008 19:53:10.989676 1444 omaha_request_action.cc:271] Posting an Omaha request to disabled
Oct 8 19:53:10.989718 update_engine[1444]: I20241008 19:53:10.989707 1444 omaha_request_action.cc:272] Request:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.989718 update_engine[1444]:
Oct 8 19:53:10.990008 update_engine[1444]: I20241008 19:53:10.989720 1444 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:53:10.994982 update_engine[1444]: I20241008 19:53:10.994928 1444 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:53:10.995340 update_engine[1444]: I20241008 19:53:10.995290 1444 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:53:10.996668 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Oct 8 19:53:11.001130 update_engine[1444]: E20241008 19:53:11.001067 1444 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:53:11.001222 update_engine[1444]: I20241008 19:53:11.001188 1444 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Oct 8 19:53:11.121665 systemd[1]: Started sshd@28-10.0.0.19:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184).
Oct 8 19:53:11.175657 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:11.177799 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:11.184244 systemd-logind[1442]: New session 29 of user core.
Oct 8 19:53:11.194001 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 8 19:53:11.316317 sshd[5243]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:11.322378 systemd[1]: sshd@28-10.0.0.19:22-10.0.0.1:45184.service: Deactivated successfully.
Oct 8 19:53:11.324873 systemd[1]: session-29.scope: Deactivated successfully.
Oct 8 19:53:11.325689 systemd-logind[1442]: Session 29 logged out. Waiting for processes to exit.
Oct 8 19:53:11.326878 systemd-logind[1442]: Removed session 29.
Oct 8 19:53:12.270790 kubelet[2628]: I1008 19:53:12.270730 2628 topology_manager.go:215] "Topology Admit Handler" podUID="34fa83ab-963c-4fd4-8087-b360fe52b43a" podNamespace="calico-apiserver" podName="calico-apiserver-74db96b67d-w7854"
Oct 8 19:53:12.283996 systemd[1]: Created slice kubepods-besteffort-pod34fa83ab_963c_4fd4_8087_b360fe52b43a.slice - libcontainer container kubepods-besteffort-pod34fa83ab_963c_4fd4_8087_b360fe52b43a.slice.
Oct 8 19:53:12.322506 kubelet[2628]: I1008 19:53:12.322446 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34fa83ab-963c-4fd4-8087-b360fe52b43a-calico-apiserver-certs\") pod \"calico-apiserver-74db96b67d-w7854\" (UID: \"34fa83ab-963c-4fd4-8087-b360fe52b43a\") " pod="calico-apiserver/calico-apiserver-74db96b67d-w7854"
Oct 8 19:53:12.322506 kubelet[2628]: I1008 19:53:12.322497 2628 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7qc\" (UniqueName: \"kubernetes.io/projected/34fa83ab-963c-4fd4-8087-b360fe52b43a-kube-api-access-jd7qc\") pod \"calico-apiserver-74db96b67d-w7854\" (UID: \"34fa83ab-963c-4fd4-8087-b360fe52b43a\") " pod="calico-apiserver/calico-apiserver-74db96b67d-w7854"
Oct 8 19:53:12.424175 kubelet[2628]: E1008 19:53:12.424121 2628 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 8 19:53:12.424443 kubelet[2628]: E1008 19:53:12.424224 2628 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34fa83ab-963c-4fd4-8087-b360fe52b43a-calico-apiserver-certs podName:34fa83ab-963c-4fd4-8087-b360fe52b43a nodeName:}" failed. No retries permitted until 2024-10-08 19:53:12.924200654 +0000 UTC m=+114.738019162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/34fa83ab-963c-4fd4-8087-b360fe52b43a-calico-apiserver-certs") pod "calico-apiserver-74db96b67d-w7854" (UID: "34fa83ab-963c-4fd4-8087-b360fe52b43a") : secret "calico-apiserver-certs" not found
Oct 8 19:53:13.190016 containerd[1464]: time="2024-10-08T19:53:13.189956833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74db96b67d-w7854,Uid:34fa83ab-963c-4fd4-8087-b360fe52b43a,Namespace:calico-apiserver,Attempt:0,}"
Oct 8 19:53:13.749743 systemd-networkd[1399]: calia826a0d7fdf: Link UP
Oct 8 19:53:13.750051 systemd-networkd[1399]: calia826a0d7fdf: Gained carrier
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.505 [INFO][5267] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0 calico-apiserver-74db96b67d- calico-apiserver 34fa83ab-963c-4fd4-8087-b360fe52b43a 1219 0 2024-10-08 19:53:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74db96b67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74db96b67d-w7854 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia826a0d7fdf [] []}} ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.505 [INFO][5267] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.540 [INFO][5280] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" HandleID="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Workload="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.547 [INFO][5280] ipam_plugin.go 270: Auto assigning IP ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" HandleID="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Workload="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a26c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74db96b67d-w7854", "timestamp":"2024-10-08 19:53:13.540242877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.547 [INFO][5280] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.547 [INFO][5280] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.547 [INFO][5280] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.549 [INFO][5280] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.553 [INFO][5280] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.557 [INFO][5280] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.558 [INFO][5280] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.561 [INFO][5280] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.561 [INFO][5280] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.562 [INFO][5280] ipam.go 1685: Creating new handle: k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.570 [INFO][5280] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.740 [INFO][5280] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.740 [INFO][5280] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" host="localhost"
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.741 [INFO][5280] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:53:13.964817 containerd[1464]: 2024-10-08 19:53:13.741 [INFO][5280] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" HandleID="k8s-pod-network.53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Workload="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.744 [INFO][5267] k8s.go 386: Populated endpoint ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0", GenerateName:"calico-apiserver-74db96b67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"34fa83ab-963c-4fd4-8087-b360fe52b43a", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74db96b67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74db96b67d-w7854", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia826a0d7fdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.744 [INFO][5267] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.744 [INFO][5267] dataplane_linux.go 68: Setting the host side veth name to calia826a0d7fdf ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.747 [INFO][5267] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.747 [INFO][5267] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0", GenerateName:"calico-apiserver-74db96b67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"34fa83ab-963c-4fd4-8087-b360fe52b43a", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74db96b67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b", Pod:"calico-apiserver-74db96b67d-w7854", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia826a0d7fdf", MAC:"5a:1f:9a:90:a7:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:53:13.966158 containerd[1464]: 2024-10-08 19:53:13.961 [INFO][5267] k8s.go 500: Wrote updated endpoint to datastore ContainerID="53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b" Namespace="calico-apiserver" Pod="calico-apiserver-74db96b67d-w7854" WorkloadEndpoint="localhost-k8s-calico--apiserver--74db96b67d--w7854-eth0"
Oct 8 19:53:14.203067 containerd[1464]: time="2024-10-08T19:53:14.202898888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:53:14.203067 containerd[1464]: time="2024-10-08T19:53:14.202990400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:53:14.203067 containerd[1464]: time="2024-10-08T19:53:14.203021209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:53:14.203774 containerd[1464]: time="2024-10-08T19:53:14.203161514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:53:14.231895 systemd[1]: Started cri-containerd-53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b.scope - libcontainer container 53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b.
Oct 8 19:53:14.247480 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:53:14.275370 containerd[1464]: time="2024-10-08T19:53:14.275304152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74db96b67d-w7854,Uid:34fa83ab-963c-4fd4-8087-b360fe52b43a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b\""
Oct 8 19:53:14.278463 containerd[1464]: time="2024-10-08T19:53:14.277144208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 19:53:15.110061 systemd-networkd[1399]: calia826a0d7fdf: Gained IPv6LL
Oct 8 19:53:16.328833 systemd[1]: Started sshd@29-10.0.0.19:22-10.0.0.1:45188.service - OpenSSH per-connection server daemon (10.0.0.1:45188).
Oct 8 19:53:16.643945 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 45188 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:53:16.648228 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:16.658770 systemd-logind[1442]: New session 30 of user core.
Oct 8 19:53:16.664946 systemd[1]: Started session-30.scope - Session 30 of User core.
Oct 8 19:53:16.853762 sshd[5355]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:16.858101 systemd[1]: sshd@29-10.0.0.19:22-10.0.0.1:45188.service: Deactivated successfully.
Oct 8 19:53:16.861184 systemd[1]: session-30.scope: Deactivated successfully.
Oct 8 19:53:16.863804 systemd-logind[1442]: Session 30 logged out. Waiting for processes to exit.
Oct 8 19:53:16.866181 systemd-logind[1442]: Removed session 30.
Oct 8 19:53:18.808026 containerd[1464]: time="2024-10-08T19:53:18.807969383Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\""
Oct 8 19:53:18.883163 containerd[1464]: time="2024-10-08T19:53:18.883061687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:53:18.889240 containerd[1464]: time="2024-10-08T19:53:18.889147867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 8 19:53:18.895671 containerd[1464]: time="2024-10-08T19:53:18.895572236Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:53:18.908523 containerd[1464]: time="2024-10-08T19:53:18.907879370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:53:18.913446 containerd[1464]: time="2024-10-08T19:53:18.913382529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.636182376s"
Oct 8 19:53:18.921733 containerd[1464]: time="2024-10-08T19:53:18.917607024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 19:53:18.926306 containerd[1464]: time="2024-10-08T19:53:18.923300072Z" level=info msg="CreateContainer within sandbox \"53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.886 [WARNING][5398] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qlplh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d890c593-8733-4509-ba00-18cbdb137a3b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b", Pod:"coredns-76f75df574-qlplh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9cd4f2120b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.886 [INFO][5398] k8s.go 608: Cleaning up netns ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.886 [INFO][5398] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" iface="eth0" netns=""
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.887 [INFO][5398] k8s.go 615: Releasing IP address(es) ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.887 [INFO][5398] utils.go 188: Calico CNI releasing IP address ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.938 [INFO][5405] ipam_plugin.go 417: Releasing address using handleID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.938 [INFO][5405] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.938 [INFO][5405] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.947 [WARNING][5405] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.947 [INFO][5405] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.949 [INFO][5405] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:53:18.974734 containerd[1464]: 2024-10-08 19:53:18.957 [INFO][5398] k8s.go 621: Teardown processing complete. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:18.974734 containerd[1464]: time="2024-10-08T19:53:18.972209326Z" level=info msg="TearDown network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" successfully"
Oct 8 19:53:18.974734 containerd[1464]: time="2024-10-08T19:53:18.972245273Z" level=info msg="StopPodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" returns successfully"
Oct 8 19:53:18.989727 containerd[1464]: time="2024-10-08T19:53:18.989011586Z" level=info msg="RemovePodSandbox for \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\""
Oct 8 19:53:18.995706 containerd[1464]: time="2024-10-08T19:53:18.995558375Z" level=info msg="Forcibly stopping sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\""
Oct 8 19:53:18.996412 containerd[1464]: time="2024-10-08T19:53:18.996336284Z" level=info msg="CreateContainer within sandbox \"53169f469f0152ff44b33f66358b878c23beebfdd31d055a05c6b9dbc6cc523b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c52a6328e1822e6879a947e06b3ae5017223e49c9446282218b18c403a83fb7\""
Oct 8 19:53:18.997505 containerd[1464]: time="2024-10-08T19:53:18.997466958Z" level=info msg="StartContainer for \"1c52a6328e1822e6879a947e06b3ae5017223e49c9446282218b18c403a83fb7\""
Oct 8 19:53:19.113571 systemd[1]: Started cri-containerd-1c52a6328e1822e6879a947e06b3ae5017223e49c9446282218b18c403a83fb7.scope - libcontainer container 1c52a6328e1822e6879a947e06b3ae5017223e49c9446282218b18c403a83fb7.
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.058 [WARNING][5431] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qlplh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d890c593-8733-4509-ba00-18cbdb137a3b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23f63af24b2d08f9016163bd36cd8eb8a7770b7afcade9d340c53f0ff145012b", Pod:"coredns-76f75df574-qlplh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9cd4f2120b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.058 [INFO][5431] k8s.go 608: Cleaning up netns ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.058 [INFO][5431] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" iface="eth0" netns=""
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.058 [INFO][5431] k8s.go 615: Releasing IP address(es) ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.058 [INFO][5431] utils.go 188: Calico CNI releasing IP address ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.092 [INFO][5439] ipam_plugin.go 417: Releasing address using handleID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.092 [INFO][5439] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.092 [INFO][5439] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.105 [WARNING][5439] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.105 [INFO][5439] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" HandleID="k8s-pod-network.e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f" Workload="localhost-k8s-coredns--76f75df574--qlplh-eth0"
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.114 [INFO][5439] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:53:19.124889 containerd[1464]: 2024-10-08 19:53:19.121 [INFO][5431] k8s.go 621: Teardown processing complete. ContainerID="e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f"
Oct 8 19:53:19.125583 containerd[1464]: time="2024-10-08T19:53:19.124949467Z" level=info msg="TearDown network for sandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" successfully"
Oct 8 19:53:19.146379 containerd[1464]: time="2024-10-08T19:53:19.146302024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:53:19.146567 containerd[1464]: time="2024-10-08T19:53:19.146423533Z" level=info msg="RemovePodSandbox \"e3436844bd21d608f711a72d34bae3c37907352c26af91d2a6416bacb177413f\" returns successfully"
Oct 8 19:53:19.150057 containerd[1464]: time="2024-10-08T19:53:19.149301025Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\""
Oct 8 19:53:19.181460 containerd[1464]: time="2024-10-08T19:53:19.181403144Z" level=info msg="StartContainer for \"1c52a6328e1822e6879a947e06b3ae5017223e49c9446282218b18c403a83fb7\" returns successfully"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.197 [WARNING][5484] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l746d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"899a83bf-2f3f-42fc-8f12-c8d235d4f83d", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510", Pod:"coredns-76f75df574-l746d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib06a6fe01cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.197 [INFO][5484] k8s.go 608: Cleaning up netns ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.197 [INFO][5484] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" iface="eth0" netns=""
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.197 [INFO][5484] k8s.go 615: Releasing IP address(es) ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.197 [INFO][5484] utils.go 188: Calico CNI releasing IP address ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.230 [INFO][5504] ipam_plugin.go 417: Releasing address using handleID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.230 [INFO][5504] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.230 [INFO][5504] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.237 [WARNING][5504] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.237 [INFO][5504] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0"
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.239 [INFO][5504] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:53:19.245363 containerd[1464]: 2024-10-08 19:53:19.241 [INFO][5484] k8s.go 621: Teardown processing complete.
ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:53:19.246154 containerd[1464]: time="2024-10-08T19:53:19.245377167Z" level=info msg="TearDown network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" successfully" Oct 8 19:53:19.246154 containerd[1464]: time="2024-10-08T19:53:19.245405259Z" level=info msg="StopPodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" returns successfully" Oct 8 19:53:19.246154 containerd[1464]: time="2024-10-08T19:53:19.246106523Z" level=info msg="RemovePodSandbox for \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" Oct 8 19:53:19.246154 containerd[1464]: time="2024-10-08T19:53:19.246131029Z" level=info msg="Forcibly stopping sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\"" Oct 8 19:53:19.295259 kubelet[2628]: E1008 19:53:19.295115 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.286 [WARNING][5529] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l746d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"899a83bf-2f3f-42fc-8f12-c8d235d4f83d", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaa503bcce31c56d918fc0059da52a8c1bbc9f9e1aaf410936c917e612b52510", Pod:"coredns-76f75df574-l746d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib06a6fe01cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.286 [INFO][5529] k8s.go 608: Cleaning up netns 
ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.286 [INFO][5529] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" iface="eth0" netns="" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.286 [INFO][5529] k8s.go 615: Releasing IP address(es) ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.286 [INFO][5529] utils.go 188: Calico CNI releasing IP address ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.312 [INFO][5537] ipam_plugin.go 417: Releasing address using handleID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.312 [INFO][5537] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.312 [INFO][5537] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.317 [WARNING][5537] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.317 [INFO][5537] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" HandleID="k8s-pod-network.a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Workload="localhost-k8s-coredns--76f75df574--l746d-eth0" Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.319 [INFO][5537] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:19.325022 containerd[1464]: 2024-10-08 19:53:19.322 [INFO][5529] k8s.go 621: Teardown processing complete. ContainerID="a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f" Oct 8 19:53:19.325738 containerd[1464]: time="2024-10-08T19:53:19.325071645Z" level=info msg="TearDown network for sandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" successfully" Oct 8 19:53:19.331619 containerd[1464]: time="2024-10-08T19:53:19.331570383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:53:19.331762 containerd[1464]: time="2024-10-08T19:53:19.331670842Z" level=info msg="RemovePodSandbox \"a2e47cbf2ff4bfc4f4866ef2ecb01b73478b8bac2a42a11a5c1d632fad63940f\" returns successfully" Oct 8 19:53:19.332366 containerd[1464]: time="2024-10-08T19:53:19.332327341Z" level=info msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.373 [WARNING][5559] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0", GenerateName:"calico-kube-controllers-78df779756-", Namespace:"calico-system", SelfLink:"", UID:"779c09a7-b1aa-448c-b504-3cddbdcbc6af", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df779756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b", Pod:"calico-kube-controllers-78df779756-sx78s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali90791fad686", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.374 [INFO][5559] k8s.go 608: Cleaning up netns ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.375 [INFO][5559] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" iface="eth0" netns="" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.375 [INFO][5559] k8s.go 615: Releasing IP address(es) ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.375 [INFO][5559] utils.go 188: Calico CNI releasing IP address ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.400 [INFO][5567] ipam_plugin.go 417: Releasing address using handleID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.400 [INFO][5567] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.400 [INFO][5567] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.405 [WARNING][5567] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.405 [INFO][5567] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.407 [INFO][5567] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:19.413092 containerd[1464]: 2024-10-08 19:53:19.409 [INFO][5559] k8s.go 621: Teardown processing complete. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.413092 containerd[1464]: time="2024-10-08T19:53:19.413052245Z" level=info msg="TearDown network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" successfully" Oct 8 19:53:19.413092 containerd[1464]: time="2024-10-08T19:53:19.413081400Z" level=info msg="StopPodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" returns successfully" Oct 8 19:53:19.413950 containerd[1464]: time="2024-10-08T19:53:19.413636268Z" level=info msg="RemovePodSandbox for \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" Oct 8 19:53:19.413950 containerd[1464]: time="2024-10-08T19:53:19.413719555Z" level=info msg="Forcibly stopping sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\"" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.452 [WARNING][5589] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0", GenerateName:"calico-kube-controllers-78df779756-", Namespace:"calico-system", SelfLink:"", UID:"779c09a7-b1aa-448c-b504-3cddbdcbc6af", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df779756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f2749c193400ef6fd51ce659845d7ce2b95eb53b8135f3cd8af21db9be674b", Pod:"calico-kube-controllers-78df779756-sx78s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali90791fad686", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.452 [INFO][5589] k8s.go 608: Cleaning up netns ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.452 [INFO][5589] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" iface="eth0" netns="" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.452 [INFO][5589] k8s.go 615: Releasing IP address(es) ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.452 [INFO][5589] utils.go 188: Calico CNI releasing IP address ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.474 [INFO][5597] ipam_plugin.go 417: Releasing address using handleID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.474 [INFO][5597] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.475 [INFO][5597] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.481 [WARNING][5597] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.481 [INFO][5597] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" HandleID="k8s-pod-network.f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Workload="localhost-k8s-calico--kube--controllers--78df779756--sx78s-eth0" Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.482 [INFO][5597] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:19.490728 containerd[1464]: 2024-10-08 19:53:19.485 [INFO][5589] k8s.go 621: Teardown processing complete. ContainerID="f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699" Oct 8 19:53:19.490728 containerd[1464]: time="2024-10-08T19:53:19.488504526Z" level=info msg="TearDown network for sandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" successfully" Oct 8 19:53:19.493884 containerd[1464]: time="2024-10-08T19:53:19.493834116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:53:19.493884 containerd[1464]: time="2024-10-08T19:53:19.493906804Z" level=info msg="RemovePodSandbox \"f2f588bb685a17ced552af70fd7e59c8eccb87b8c90c5094fdb52da6b1bc9699\" returns successfully" Oct 8 19:53:19.494547 containerd[1464]: time="2024-10-08T19:53:19.494486127Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.539 [WARNING][5619] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88gsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae7adb50-443a-4488-8328-041f1c3fd2cd", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60", Pod:"csi-node-driver-88gsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calic3268b13b19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.539 [INFO][5619] k8s.go 608: Cleaning up netns ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.539 [INFO][5619] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" iface="eth0" netns="" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.539 [INFO][5619] k8s.go 615: Releasing IP address(es) ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.539 [INFO][5619] utils.go 188: Calico CNI releasing IP address ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.564 [INFO][5627] ipam_plugin.go 417: Releasing address using handleID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.564 [INFO][5627] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.564 [INFO][5627] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.571 [WARNING][5627] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.571 [INFO][5627] ipam_plugin.go 445: Releasing address using workloadID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.573 [INFO][5627] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:19.578308 containerd[1464]: 2024-10-08 19:53:19.575 [INFO][5619] k8s.go 621: Teardown processing complete. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.578926 containerd[1464]: time="2024-10-08T19:53:19.578371387Z" level=info msg="TearDown network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" successfully" Oct 8 19:53:19.578926 containerd[1464]: time="2024-10-08T19:53:19.578401784Z" level=info msg="StopPodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" returns successfully" Oct 8 19:53:19.579355 containerd[1464]: time="2024-10-08T19:53:19.579314697Z" level=info msg="RemovePodSandbox for \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:53:19.579418 containerd[1464]: time="2024-10-08T19:53:19.579356266Z" level=info msg="Forcibly stopping sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\"" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.622 [WARNING][5651] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88gsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae7adb50-443a-4488-8328-041f1c3fd2cd", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5dc01517ee8dbb1603f1db22bd9d9841b5f69a05a404a62bc53a43a1da279e60", Pod:"csi-node-driver-88gsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic3268b13b19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.622 [INFO][5651] k8s.go 608: Cleaning up netns ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.622 [INFO][5651] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" iface="eth0" netns="" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.622 [INFO][5651] k8s.go 615: Releasing IP address(es) ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.622 [INFO][5651] utils.go 188: Calico CNI releasing IP address ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.655 [INFO][5658] ipam_plugin.go 417: Releasing address using handleID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.655 [INFO][5658] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.655 [INFO][5658] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.660 [WARNING][5658] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.660 [INFO][5658] ipam_plugin.go 445: Releasing address using workloadID ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" HandleID="k8s-pod-network.04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Workload="localhost-k8s-csi--node--driver--88gsg-eth0" Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.661 [INFO][5658] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:53:19.666981 containerd[1464]: 2024-10-08 19:53:19.664 [INFO][5651] k8s.go 621: Teardown processing complete. ContainerID="04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b" Oct 8 19:53:19.666981 containerd[1464]: time="2024-10-08T19:53:19.666950957Z" level=info msg="TearDown network for sandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" successfully" Oct 8 19:53:19.672006 containerd[1464]: time="2024-10-08T19:53:19.671903115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:53:19.672006 containerd[1464]: time="2024-10-08T19:53:19.671987625Z" level=info msg="RemovePodSandbox \"04d64ac010be8ea81c3c5227aace1dd4e8f65be4e603987ff5142a3ab8c2fc3b\" returns successfully" Oct 8 19:53:19.699578 kubelet[2628]: I1008 19:53:19.699520 2628 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74db96b67d-w7854" podStartSLOduration=3.057749583 podStartE2EDuration="7.699466417s" podCreationTimestamp="2024-10-08 19:53:12 +0000 UTC" firstStartedPulling="2024-10-08 19:53:14.276860272 +0000 UTC m=+116.090678780" lastFinishedPulling="2024-10-08 19:53:18.918577105 +0000 UTC m=+120.732395614" observedRunningTime="2024-10-08 19:53:19.695336761 +0000 UTC m=+121.509155269" watchObservedRunningTime="2024-10-08 19:53:19.699466417 +0000 UTC m=+121.513284925" Oct 8 19:53:20.966259 update_engine[1444]: I20241008 19:53:20.966149 1444 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 19:53:20.966834 update_engine[1444]: I20241008 19:53:20.966547 1444 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 19:53:20.966888 update_engine[1444]: I20241008 19:53:20.966856 1444 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 19:53:20.974994 update_engine[1444]: E20241008 19:53:20.974949 1444 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 19:53:20.975049 update_engine[1444]: I20241008 19:53:20.975010 1444 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 8 19:53:21.873283 systemd[1]: Started sshd@30-10.0.0.19:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106). Oct 8 19:53:21.916643 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:21.918539 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:21.923363 systemd-logind[1442]: New session 31 of user core. Oct 8 19:53:21.933913 systemd[1]: Started session-31.scope - Session 31 of User core. Oct 8 19:53:22.108334 sshd[5671]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:22.113358 systemd[1]: sshd@30-10.0.0.19:22-10.0.0.1:39106.service: Deactivated successfully. Oct 8 19:53:22.115826 systemd[1]: session-31.scope: Deactivated successfully. Oct 8 19:53:22.117316 systemd-logind[1442]: Session 31 logged out. Waiting for processes to exit. Oct 8 19:53:22.118466 systemd-logind[1442]: Removed session 31. Oct 8 19:53:27.119178 systemd[1]: Started sshd@31-10.0.0.19:22-10.0.0.1:39114.service - OpenSSH per-connection server daemon (10.0.0.1:39114). Oct 8 19:53:27.157608 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 39114 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:27.160415 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:27.165571 systemd-logind[1442]: New session 32 of user core. Oct 8 19:53:27.172949 systemd[1]: Started session-32.scope - Session 32 of User core. 
Oct 8 19:53:27.292861 sshd[5712]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:27.298102 systemd[1]: sshd@31-10.0.0.19:22-10.0.0.1:39114.service: Deactivated successfully. Oct 8 19:53:27.300884 systemd[1]: session-32.scope: Deactivated successfully. Oct 8 19:53:27.301817 systemd-logind[1442]: Session 32 logged out. Waiting for processes to exit. Oct 8 19:53:27.303736 systemd-logind[1442]: Removed session 32.