Nov 12 22:51:49.887976 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024
Nov 12 22:51:49.888000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:51:49.888011 kernel: BIOS-provided physical RAM map:
Nov 12 22:51:49.888017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 22:51:49.888023 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 22:51:49.888029 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 22:51:49.888036 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 22:51:49.888042 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 22:51:49.888048 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 22:51:49.888057 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 22:51:49.888063 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 22:51:49.888069 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 22:51:49.888075 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 22:51:49.888081 kernel: NX (Execute Disable) protection: active
Nov 12 22:51:49.888088 kernel: APIC: Static calls initialized
Nov 12 22:51:49.888097 kernel: SMBIOS 2.8 present.
Nov 12 22:51:49.888104 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 22:51:49.888110 kernel: Hypervisor detected: KVM
Nov 12 22:51:49.888116 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 22:51:49.888123 kernel: kvm-clock: using sched offset of 2257360413 cycles
Nov 12 22:51:49.888129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 22:51:49.888136 kernel: tsc: Detected 2794.748 MHz processor
Nov 12 22:51:49.888143 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 22:51:49.888150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 22:51:49.888157 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 22:51:49.888166 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 22:51:49.888172 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 22:51:49.888179 kernel: Using GB pages for direct mapping
Nov 12 22:51:49.888186 kernel: ACPI: Early table checksum verification disabled
Nov 12 22:51:49.888192 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 22:51:49.888199 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888205 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888212 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888221 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 22:51:49.888228 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888234 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888241 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888247 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:51:49.888254 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 22:51:49.888261 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 22:51:49.888271 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 22:51:49.888280 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 22:51:49.888287 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 22:51:49.888294 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 22:51:49.888301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 22:51:49.888308 kernel: No NUMA configuration found
Nov 12 22:51:49.888314 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 22:51:49.888321 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 22:51:49.888338 kernel: Zone ranges:
Nov 12 22:51:49.888345 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 22:51:49.888352 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 22:51:49.888359 kernel:   Normal   empty
Nov 12 22:51:49.888366 kernel: Movable zone start for each node
Nov 12 22:51:49.888372 kernel: Early memory node ranges
Nov 12 22:51:49.888379 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 22:51:49.888386 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 22:51:49.888393 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 22:51:49.888402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 22:51:49.888409 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 22:51:49.888416 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 22:51:49.888423 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 22:51:49.888429 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 22:51:49.888436 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 22:51:49.888443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 22:51:49.888450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 22:51:49.888457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 22:51:49.888466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 22:51:49.888473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 22:51:49.888480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 22:51:49.888487 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 22:51:49.888493 kernel: TSC deadline timer available
Nov 12 22:51:49.888500 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 22:51:49.888507 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 22:51:49.888514 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 22:51:49.888521 kernel: kvm-guest: setup PV sched yield
Nov 12 22:51:49.888527 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 22:51:49.888537 kernel: Booting paravirtualized kernel on KVM
Nov 12 22:51:49.888544 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 22:51:49.888551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 22:51:49.888558 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 22:51:49.888565 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 22:51:49.888572 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 22:51:49.888578 kernel: kvm-guest: PV spinlocks enabled
Nov 12 22:51:49.888585 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 22:51:49.888593 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:51:49.888603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 22:51:49.888610 kernel: random: crng init done
Nov 12 22:51:49.888617 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 22:51:49.888624 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 22:51:49.888631 kernel: Fallback order for Node 0: 0
Nov 12 22:51:49.888638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 22:51:49.888644 kernel: Policy zone: DMA32
Nov 12 22:51:49.888651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 22:51:49.888661 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Nov 12 22:51:49.888668 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 22:51:49.888675 kernel: ftrace: allocating 37801 entries in 148 pages
Nov 12 22:51:49.888682 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 22:51:49.888689 kernel: Dynamic Preempt: voluntary
Nov 12 22:51:49.888696 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 22:51:49.888704 kernel: rcu: RCU event tracing is enabled.
Nov 12 22:51:49.888711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 22:51:49.888718 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 22:51:49.888738 kernel: Rude variant of Tasks RCU enabled.
Nov 12 22:51:49.888746 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 22:51:49.888753 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 22:51:49.888760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 22:51:49.888766 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 22:51:49.888773 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 22:51:49.888780 kernel: Console: colour VGA+ 80x25
Nov 12 22:51:49.888787 kernel: printk: console [ttyS0] enabled
Nov 12 22:51:49.888794 kernel: ACPI: Core revision 20230628
Nov 12 22:51:49.888804 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 22:51:49.888811 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 22:51:49.888818 kernel: x2apic enabled
Nov 12 22:51:49.888825 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 22:51:49.888832 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 22:51:49.888839 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 22:51:49.888846 kernel: kvm-guest: setup PV IPIs
Nov 12 22:51:49.888864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 22:51:49.888871 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 22:51:49.888878 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 12 22:51:49.888885 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 22:51:49.888893 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 22:51:49.888902 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 22:51:49.888910 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 22:51:49.888917 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 22:51:49.888924 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 22:51:49.888934 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 22:51:49.888941 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 22:51:49.888948 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 22:51:49.888955 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 22:51:49.888962 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 22:51:49.888970 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 22:51:49.888977 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 22:51:49.888984 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 22:51:49.888992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 22:51:49.889001 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 22:51:49.889009 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 22:51:49.889016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 22:51:49.889023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 22:51:49.889030 kernel: Freeing SMP alternatives memory: 32K
Nov 12 22:51:49.889037 kernel: pid_max: default: 32768 minimum: 301
Nov 12 22:51:49.889044 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 22:51:49.889051 kernel: landlock: Up and running.
Nov 12 22:51:49.889058 kernel: SELinux: Initializing.
Nov 12 22:51:49.889068 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:51:49.889075 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:51:49.889082 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 22:51:49.889090 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:51:49.889097 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:51:49.889104 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:51:49.889111 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 22:51:49.889118 kernel: ... version:                0
Nov 12 22:51:49.889128 kernel: ... bit width:              48
Nov 12 22:51:49.889135 kernel: ... generic registers:      6
Nov 12 22:51:49.889142 kernel: ... value mask:             0000ffffffffffff
Nov 12 22:51:49.889149 kernel: ... max period:             00007fffffffffff
Nov 12 22:51:49.889156 kernel: ... fixed-purpose events:   0
Nov 12 22:51:49.889163 kernel: ... event mask:             000000000000003f
Nov 12 22:51:49.889170 kernel: signal: max sigframe size: 1776
Nov 12 22:51:49.889177 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 22:51:49.889185 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 22:51:49.889192 kernel: smp: Bringing up secondary CPUs ...
Nov 12 22:51:49.889202 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 22:51:49.889209 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 22:51:49.889216 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 22:51:49.889223 kernel: smpboot: Max logical packages: 1
Nov 12 22:51:49.889230 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 12 22:51:49.889237 kernel: devtmpfs: initialized
Nov 12 22:51:49.889244 kernel: x86/mm: Memory block size: 128MB
Nov 12 22:51:49.889251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 22:51:49.889259 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 22:51:49.889269 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 22:51:49.889276 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 22:51:49.889283 kernel: audit: initializing netlink subsys (disabled)
Nov 12 22:51:49.889290 kernel: audit: type=2000 audit(1731451910.018:1): state=initialized audit_enabled=0 res=1
Nov 12 22:51:49.889297 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 22:51:49.889304 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 22:51:49.889311 kernel: cpuidle: using governor menu
Nov 12 22:51:49.889318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 22:51:49.889325 kernel: dca service started, version 1.12.1
Nov 12 22:51:49.889343 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 22:51:49.889350 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 22:51:49.889357 kernel: PCI: Using configuration type 1 for base access
Nov 12 22:51:49.889364 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 22:51:49.889371 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 22:51:49.889379 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 22:51:49.889386 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 22:51:49.889393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 22:51:49.889400 kernel: ACPI: Added _OSI(Module Device)
Nov 12 22:51:49.889410 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 22:51:49.889417 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 22:51:49.889425 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 22:51:49.889432 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 22:51:49.889439 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 22:51:49.889446 kernel: ACPI: Interpreter enabled
Nov 12 22:51:49.889453 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 22:51:49.889460 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 22:51:49.889467 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 22:51:49.889477 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 22:51:49.889484 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 22:51:49.889491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 22:51:49.889687 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 22:51:49.889831 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 22:51:49.889952 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 22:51:49.889962 kernel: PCI host bridge to bus 0000:00
Nov 12 22:51:49.890090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 22:51:49.890200 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 22:51:49.890309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 22:51:49.890429 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 22:51:49.890539 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 22:51:49.890648 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 22:51:49.890772 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 22:51:49.890944 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 22:51:49.891076 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 22:51:49.891217 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 22:51:49.891383 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 22:51:49.891538 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 22:51:49.891697 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 22:51:49.891878 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 22:51:49.892023 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 22:51:49.892147 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 22:51:49.892265 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 22:51:49.892404 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 22:51:49.892524 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 22:51:49.892642 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 22:51:49.892905 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 22:51:49.893033 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 22:51:49.893151 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 22:51:49.893268 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 22:51:49.893397 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 22:51:49.893516 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 22:51:49.893642 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 22:51:49.893781 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 22:51:49.893909 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 22:51:49.894028 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 22:51:49.894144 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 22:51:49.894271 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 22:51:49.894400 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 22:51:49.894410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 22:51:49.894422 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 22:51:49.894429 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 22:51:49.894437 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 22:51:49.894444 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 22:51:49.894451 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 22:51:49.894459 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 22:51:49.894466 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 22:51:49.894474 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 22:51:49.894481 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 22:51:49.894491 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 22:51:49.894499 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 22:51:49.894506 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 22:51:49.894514 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 22:51:49.894521 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 22:51:49.894528 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 22:51:49.894536 kernel: iommu: Default domain type: Translated
Nov 12 22:51:49.894543 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 22:51:49.894550 kernel: PCI: Using ACPI for IRQ routing
Nov 12 22:51:49.894560 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 22:51:49.894568 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 22:51:49.894576 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 22:51:49.894696 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 22:51:49.894846 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 22:51:49.894963 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 22:51:49.894973 kernel: vgaarb: loaded
Nov 12 22:51:49.894980 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 22:51:49.894992 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 22:51:49.894999 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 22:51:49.895006 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 22:51:49.895015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 22:51:49.895022 kernel: pnp: PnP ACPI init
Nov 12 22:51:49.895155 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 22:51:49.895166 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 22:51:49.895174 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 22:51:49.895185 kernel: NET: Registered PF_INET protocol family
Nov 12 22:51:49.895193 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 22:51:49.895200 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 22:51:49.895208 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 22:51:49.895215 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 22:51:49.895223 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 22:51:49.895230 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 22:51:49.895237 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:51:49.895245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:51:49.895255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 22:51:49.895263 kernel: NET: Registered PF_XDP protocol family
Nov 12 22:51:49.895385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 22:51:49.895497 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 22:51:49.895606 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 22:51:49.895714 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 22:51:49.895836 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 22:51:49.895946 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 22:51:49.895960 kernel: PCI: CLS 0 bytes, default 64
Nov 12 22:51:49.895967 kernel: Initialise system trusted keyrings
Nov 12 22:51:49.895975 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 22:51:49.895982 kernel: Key type asymmetric registered
Nov 12 22:51:49.895990 kernel: Asymmetric key parser 'x509' registered
Nov 12 22:51:49.895997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 22:51:49.896005 kernel: io scheduler mq-deadline registered
Nov 12 22:51:49.896012 kernel: io scheduler kyber registered
Nov 12 22:51:49.896019 kernel: io scheduler bfq registered
Nov 12 22:51:49.896030 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 22:51:49.896038 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 22:51:49.896046 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 22:51:49.896053 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 22:51:49.896061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 22:51:49.896068 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 22:51:49.896076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 22:51:49.896083 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 22:51:49.896091 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 22:51:49.896216 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 22:51:49.896227 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 22:51:49.896347 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 22:51:49.896460 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T22:51:49 UTC (1731451909)
Nov 12 22:51:49.896572 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 22:51:49.896581 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 22:51:49.896589 kernel: NET: Registered PF_INET6 protocol family
Nov 12 22:51:49.896597 kernel: Segment Routing with IPv6
Nov 12 22:51:49.896608 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 22:51:49.896615 kernel: NET: Registered PF_PACKET protocol family
Nov 12 22:51:49.896623 kernel: Key type dns_resolver registered
Nov 12 22:51:49.896630 kernel: IPI shorthand broadcast: enabled
Nov 12 22:51:49.896638 kernel: sched_clock: Marking stable (538003517, 105057603)->(685298083, -42236963)
Nov 12 22:51:49.896646 kernel: registered taskstats version 1
Nov 12 22:51:49.896653 kernel: Loading compiled-in X.509 certificates
Nov 12 22:51:49.896661 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4'
Nov 12 22:51:49.896668 kernel: Key type .fscrypt registered
Nov 12 22:51:49.896678 kernel: Key type fscrypt-provisioning registered
Nov 12 22:51:49.896686 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 22:51:49.896693 kernel: ima: Allocated hash algorithm: sha1
Nov 12 22:51:49.896701 kernel: ima: No architecture policies found
Nov 12 22:51:49.896708 kernel: clk: Disabling unused clocks
Nov 12 22:51:49.896715 kernel: Freeing unused kernel image (initmem) memory: 42968K
Nov 12 22:51:49.896723 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 22:51:49.896750 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Nov 12 22:51:49.896758 kernel: Run /init as init process
Nov 12 22:51:49.896768 kernel:   with arguments:
Nov 12 22:51:49.896775 kernel:     /init
Nov 12 22:51:49.896782 kernel:   with environment:
Nov 12 22:51:49.896790 kernel:     HOME=/
Nov 12 22:51:49.896797 kernel:     TERM=linux
Nov 12 22:51:49.896804 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 22:51:49.896814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:51:49.896824 systemd[1]: Detected virtualization kvm.
Nov 12 22:51:49.896835 systemd[1]: Detected architecture x86-64.
Nov 12 22:51:49.896843 systemd[1]: Running in initrd.
Nov 12 22:51:49.896851 systemd[1]: No hostname configured, using default hostname.
Nov 12 22:51:49.896858 systemd[1]: Hostname set to .
Nov 12 22:51:49.896866 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:51:49.896874 systemd[1]: Queued start job for default target initrd.target.
Nov 12 22:51:49.896882 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:51:49.896890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:51:49.896902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 22:51:49.896923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:51:49.896934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 22:51:49.896942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 22:51:49.896952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 22:51:49.896963 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 22:51:49.896971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:51:49.896979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:51:49.896987 systemd[1]: Reached target paths.target - Path Units.
Nov 12 22:51:49.896995 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:51:49.897003 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:51:49.897011 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 22:51:49.897019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:51:49.897030 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:51:49.897038 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 22:51:49.897046 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 22:51:49.897054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:51:49.897062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:51:49.897073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:51:49.897081 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:51:49.897089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:51:49.897099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:51:49.897107 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:51:49.897115 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:51:49.897123 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:51:49.897132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:51:49.897140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:51:49.897148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:51:49.897156 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:51:49.897164 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:51:49.897176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:51:49.897206 systemd-journald[193]: Collecting audit messages is disabled. Nov 12 22:51:49.897228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:51:49.897237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:51:49.897245 systemd-journald[193]: Journal started Nov 12 22:51:49.897266 systemd-journald[193]: Runtime Journal (/run/log/journal/1549cbac838447729600aef18837edfd) is 6.0M, max 48.4M, 42.3M free. Nov 12 22:51:49.897835 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:51:49.903902 systemd-modules-load[195]: Inserted module 'overlay' Nov 12 22:51:49.932760 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 12 22:51:49.935231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:51:49.937541 kernel: Bridge firewalling registered Nov 12 22:51:49.935258 systemd-modules-load[195]: Inserted module 'br_netfilter' Nov 12 22:51:49.939595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:51:49.942615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:51:49.944015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:51:49.946218 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:51:49.956477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:51:49.967524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:51:49.969206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:51:49.975958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:51:49.977554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:51:49.981206 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:51:49.996119 dracut-cmdline[232]: dracut-dracut-053 Nov 12 22:51:49.998909 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:51:50.010477 systemd-resolved[228]: Positive Trust Anchors: Nov 12 22:51:50.010495 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:51:50.010526 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:51:50.013056 systemd-resolved[228]: Defaulting to hostname 'linux'. Nov 12 22:51:50.014162 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:51:50.023501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:51:50.092773 kernel: SCSI subsystem initialized Nov 12 22:51:50.101754 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:51:50.112768 kernel: iscsi: registered transport (tcp) Nov 12 22:51:50.132790 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:51:50.132850 kernel: QLogic iSCSI HBA Driver Nov 12 22:51:50.184076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:51:50.196880 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:51:50.221773 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 12 22:51:50.221818 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:51:50.222814 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:51:50.266766 kernel: raid6: avx2x4 gen() 30544 MB/s Nov 12 22:51:50.283750 kernel: raid6: avx2x2 gen() 31245 MB/s Nov 12 22:51:50.300870 kernel: raid6: avx2x1 gen() 25920 MB/s Nov 12 22:51:50.300943 kernel: raid6: using algorithm avx2x2 gen() 31245 MB/s Nov 12 22:51:50.318901 kernel: raid6: .... xor() 19497 MB/s, rmw enabled Nov 12 22:51:50.318954 kernel: raid6: using avx2x2 recovery algorithm Nov 12 22:51:50.342772 kernel: xor: automatically using best checksumming function avx Nov 12 22:51:50.511771 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:51:50.525750 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:51:50.533901 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:51:50.550769 systemd-udevd[415]: Using default interface naming scheme 'v255'. Nov 12 22:51:50.556804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:51:50.563886 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:51:50.576835 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Nov 12 22:51:50.609937 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:51:50.623038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:51:50.681188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:51:50.692980 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 12 22:51:50.720221 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 12 22:51:50.746926 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 22:51:50.746953 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 22:51:50.747202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:51:50.747222 kernel: GPT:9289727 != 19775487 Nov 12 22:51:50.747241 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:51:50.747256 kernel: GPT:9289727 != 19775487 Nov 12 22:51:50.747273 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 22:51:50.747287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:51:50.747312 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 22:51:50.747322 kernel: AES CTR mode by8 optimization enabled Nov 12 22:51:50.720978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:51:50.723238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:51:50.724898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:51:50.726470 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:51:50.743212 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:51:50.759075 kernel: libata version 3.00 loaded. Nov 12 22:51:50.747723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:51:50.747847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 22:51:50.767126 kernel: ahci 0000:00:1f.2: version 3.0 Nov 12 22:51:50.789514 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 12 22:51:50.789536 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 12 22:51:50.789705 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 12 22:51:50.789916 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459) Nov 12 22:51:50.789942 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (463) Nov 12 22:51:50.789958 kernel: scsi host0: ahci Nov 12 22:51:50.790155 kernel: scsi host1: ahci Nov 12 22:51:50.790354 kernel: scsi host2: ahci Nov 12 22:51:50.790542 kernel: scsi host3: ahci Nov 12 22:51:50.790741 kernel: scsi host4: ahci Nov 12 22:51:50.790942 kernel: scsi host5: ahci Nov 12 22:51:50.791126 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 12 22:51:50.791142 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 12 22:51:50.791157 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 12 22:51:50.791170 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 12 22:51:50.791183 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 12 22:51:50.791197 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 12 22:51:50.752618 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:51:50.760940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:51:50.761165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:51:50.775120 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:51:50.786278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 22:51:50.797338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:51:50.819899 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 22:51:50.842458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 22:51:50.842726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:51:50.850141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:51:50.854010 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 22:51:50.854080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 22:51:50.870873 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:51:50.872683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:51:50.880594 disk-uuid[558]: Primary Header is updated. Nov 12 22:51:50.880594 disk-uuid[558]: Secondary Entries is updated. Nov 12 22:51:50.880594 disk-uuid[558]: Secondary Header is updated. Nov 12 22:51:50.884840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:51:50.890754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:51:50.893842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 22:51:51.101064 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 22:51:51.101141 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 12 22:51:51.101152 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 12 22:51:51.102760 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 12 22:51:51.103760 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 22:51:51.103798 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 22:51:51.104764 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 12 22:51:51.106080 kernel: ata3.00: applying bridge limits Nov 12 22:51:51.106097 kernel: ata3.00: configured for UDMA/100 Nov 12 22:51:51.106782 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 12 22:51:51.155770 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 12 22:51:51.169411 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 22:51:51.169434 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 22:51:51.891333 disk-uuid[559]: The operation has completed successfully. Nov 12 22:51:51.893020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:51:51.920852 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:51:51.920976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:51:51.945927 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:51:51.951164 sh[595]: Success Nov 12 22:51:51.963749 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 12 22:51:51.998382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:51:52.012233 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:51:52.015632 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 12 22:51:52.025582 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a Nov 12 22:51:52.025623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:51:52.025633 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:51:52.026594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:51:52.027328 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:51:52.031766 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:51:52.032547 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:51:52.046958 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 22:51:52.049721 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:51:52.057775 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:51:52.057819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:51:52.057835 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:51:52.060766 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:51:52.069838 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 22:51:52.071859 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:51:52.081420 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 22:51:52.088013 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 12 22:51:52.148460 ignition[695]: Ignition 2.20.0 Nov 12 22:51:52.148472 ignition[695]: Stage: fetch-offline Nov 12 22:51:52.148512 ignition[695]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:51:52.148522 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:51:52.148603 ignition[695]: parsed url from cmdline: "" Nov 12 22:51:52.148607 ignition[695]: no config URL provided Nov 12 22:51:52.148612 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:51:52.148620 ignition[695]: no config at "/usr/lib/ignition/user.ign" Nov 12 22:51:52.148646 ignition[695]: op(1): [started] loading QEMU firmware config module Nov 12 22:51:52.148651 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 22:51:52.157992 ignition[695]: op(1): [finished] loading QEMU firmware config module Nov 12 22:51:52.164706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:51:52.176872 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:51:52.199401 systemd-networkd[783]: lo: Link UP Nov 12 22:51:52.199411 systemd-networkd[783]: lo: Gained carrier Nov 12 22:51:52.202401 systemd-networkd[783]: Enumeration completed Nov 12 22:51:52.203328 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:51:52.206034 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:51:52.206044 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:51:52.209766 systemd[1]: Reached target network.target - Network. 
Nov 12 22:51:52.212626 systemd-networkd[783]: eth0: Link UP Nov 12 22:51:52.212633 systemd-networkd[783]: eth0: Gained carrier Nov 12 22:51:52.212641 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:51:52.220649 ignition[695]: parsing config with SHA512: 5ca38c3f2ab37e73318d5f1fffd9334939b2f7f43afd92d6843510fe6f69a3468f82d1292873938a45f60cc6577ddec15729267cab0a2eea4eb6891571cb9fd7 Nov 12 22:51:52.225937 unknown[695]: fetched base config from "system" Nov 12 22:51:52.225949 unknown[695]: fetched user config from "qemu" Nov 12 22:51:52.226369 ignition[695]: fetch-offline: fetch-offline passed Nov 12 22:51:52.226448 ignition[695]: Ignition finished successfully Nov 12 22:51:52.228786 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:51:52.230316 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:51:52.231163 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:51:52.236947 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 22:51:52.248694 ignition[786]: Ignition 2.20.0 Nov 12 22:51:52.248705 ignition[786]: Stage: kargs Nov 12 22:51:52.248907 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:51:52.248920 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:51:52.249910 ignition[786]: kargs: kargs passed Nov 12 22:51:52.253404 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:51:52.249963 ignition[786]: Ignition finished successfully Nov 12 22:51:52.265888 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 12 22:51:52.276814 ignition[795]: Ignition 2.20.0 Nov 12 22:51:52.276826 ignition[795]: Stage: disks Nov 12 22:51:52.277009 ignition[795]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:51:52.277022 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:51:52.280085 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:51:52.278009 ignition[795]: disks: disks passed Nov 12 22:51:52.282354 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:51:52.278062 ignition[795]: Ignition finished successfully Nov 12 22:51:52.284470 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:51:52.285893 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:51:52.287644 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:51:52.289960 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:51:52.308946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:51:52.322593 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:51:52.329161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:51:52.337868 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:51:52.438764 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none. Nov 12 22:51:52.438905 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:51:52.439630 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:51:52.457848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:51:52.459982 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:51:52.461720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Nov 12 22:51:52.467091 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Nov 12 22:51:52.461782 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:51:52.461808 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:51:52.472366 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:51:52.472386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:51:52.472400 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:51:52.469521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 22:51:52.475534 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 22:51:52.479752 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:51:52.481368 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 22:51:52.511862 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:51:52.517092 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:51:52.521967 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:51:52.525059 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:51:52.605851 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:51:52.621819 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:51:52.624649 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 12 22:51:52.630750 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:51:52.648859 ignition[926]: INFO : Ignition 2.20.0 Nov 12 22:51:52.648859 ignition[926]: INFO : Stage: mount Nov 12 22:51:52.650658 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:51:52.650658 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:51:52.650011 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:51:52.655314 ignition[926]: INFO : mount: mount passed Nov 12 22:51:52.656138 ignition[926]: INFO : Ignition finished successfully Nov 12 22:51:52.658866 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:51:52.677885 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:51:53.024796 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:51:53.036962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:51:53.044100 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Nov 12 22:51:53.044139 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:51:53.044151 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:51:53.045751 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:51:53.047755 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:51:53.049577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:51:53.068478 ignition[957]: INFO : Ignition 2.20.0 Nov 12 22:51:53.068478 ignition[957]: INFO : Stage: files Nov 12 22:51:53.070497 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:51:53.070497 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:51:53.070497 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:51:53.070497 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:51:53.070497 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:51:53.077542 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:51:53.077542 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:51:53.077542 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:51:53.077542 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:51:53.077542 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:51:53.073074 unknown[957]: wrote ssh authorized keys file for user: core Nov 12 22:51:53.114285 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 22:51:53.191397 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 
22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:51:53.193937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 22:51:53.583361 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 22:51:53.923621 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:51:53.923621 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 12 22:51:53.927482 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 22:51:53.948126 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:51:53.953828 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:51:53.955363 ignition[957]: INFO : files: op(f): [finished] 
setting preset to disabled for "coreos-metadata.service" Nov 12 22:51:53.955363 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:51:53.955363 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:51:53.955363 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:51:53.955363 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:51:53.955363 ignition[957]: INFO : files: files passed Nov 12 22:51:53.955363 ignition[957]: INFO : Ignition finished successfully Nov 12 22:51:53.966318 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:51:53.978873 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:51:53.980575 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:51:53.982413 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:51:53.982520 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:51:53.989795 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:51:53.992042 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:51:53.992042 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:51:53.995050 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:51:53.998530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:51:53.999937 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Nov 12 22:51:54.011881 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 22:51:54.032836 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 22:51:54.032951 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 22:51:54.035487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 22:51:54.036301 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 22:51:54.038236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 22:51:54.039076 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 22:51:54.055629 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 22:51:54.070851 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 22:51:54.080022 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:51:54.082384 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:51:54.084780 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 22:51:54.086597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 22:51:54.087596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 22:51:54.090118 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 22:51:54.092186 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 22:51:54.094030 systemd-networkd[783]: eth0: Gained IPv6LL
Nov 12 22:51:54.094205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 22:51:54.097086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 22:51:54.099525 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 22:51:54.101783 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 22:51:54.103835 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 22:51:54.106304 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 22:51:54.108361 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 22:51:54.110382 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 22:51:54.111998 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 22:51:54.113096 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 22:51:54.115362 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:51:54.117522 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:51:54.119868 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 22:51:54.120899 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:51:54.123461 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 22:51:54.124453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 22:51:54.126656 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 22:51:54.127717 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 22:51:54.130068 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 22:51:54.131817 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 22:51:54.135815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:51:54.138626 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 22:51:54.140493 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 22:51:54.142422 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 22:51:54.143327 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:51:54.145303 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 22:51:54.146247 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:51:54.148329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 22:51:54.149518 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 22:51:54.152079 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 22:51:54.153083 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 22:51:54.172901 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 22:51:54.175146 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 22:51:54.176426 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:51:54.180589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 22:51:54.182793 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 22:51:54.184162 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:51:54.187097 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 22:51:54.188348 ignition[1011]: INFO : Ignition 2.20.0
Nov 12 22:51:54.188348 ignition[1011]: INFO : Stage: umount
Nov 12 22:51:54.188348 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:51:54.188348 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:51:54.192530 ignition[1011]: INFO : umount: umount passed
Nov 12 22:51:54.192530 ignition[1011]: INFO : Ignition finished successfully
Nov 12 22:51:54.188412 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 22:51:54.197902 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 22:51:54.198978 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 22:51:54.202889 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 22:51:54.203964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 22:51:54.207670 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 22:51:54.209412 systemd[1]: Stopped target network.target - Network.
Nov 12 22:51:54.211276 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 22:51:54.212214 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 22:51:54.214338 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 22:51:54.214390 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 22:51:54.217287 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 22:51:54.218189 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 22:51:54.220116 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 22:51:54.221109 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 22:51:54.223416 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 22:51:54.225664 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 22:51:54.227773 systemd-networkd[783]: eth0: DHCPv6 lease lost
Nov 12 22:51:54.229852 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 22:51:54.230920 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 22:51:54.233335 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 22:51:54.234354 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 22:51:54.238065 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 22:51:54.238115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:51:54.249907 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 22:51:54.250941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 22:51:54.251010 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 22:51:54.253187 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 22:51:54.253246 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:51:54.255280 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 22:51:54.255328 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:51:54.256547 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 22:51:54.256593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:51:54.257116 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:51:54.268227 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 22:51:54.268353 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 22:51:54.283664 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 22:51:54.283871 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:51:54.286261 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 22:51:54.286310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:51:54.288425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 22:51:54.288467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:51:54.290459 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 22:51:54.290522 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 22:51:54.292649 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 22:51:54.292697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 22:51:54.294642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 22:51:54.294689 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:51:54.309886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 22:51:54.310994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 22:51:54.311047 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:51:54.313336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 22:51:54.313386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:51:54.316859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 22:51:54.316973 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 22:51:54.443131 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 22:51:54.443316 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 22:51:54.445846 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 22:51:54.447082 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 22:51:54.447145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 22:51:54.458881 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 22:51:54.465723 systemd[1]: Switching root.
Nov 12 22:51:54.498165 systemd-journald[193]: Journal stopped
Nov 12 22:51:55.635973 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Nov 12 22:51:55.636059 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 22:51:55.636083 kernel: SELinux: policy capability open_perms=1
Nov 12 22:51:55.636099 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 22:51:55.636114 kernel: SELinux: policy capability always_check_network=0
Nov 12 22:51:55.636128 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 22:51:55.636148 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 22:51:55.636172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 22:51:55.636188 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 22:51:55.636203 kernel: audit: type=1403 audit(1731451914.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 22:51:55.636219 systemd[1]: Successfully loaded SELinux policy in 39.458ms.
Nov 12 22:51:55.636250 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.021ms.
Nov 12 22:51:55.636268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:51:55.636284 systemd[1]: Detected virtualization kvm.
Nov 12 22:51:55.636299 systemd[1]: Detected architecture x86-64.
Nov 12 22:51:55.636317 systemd[1]: Detected first boot.
Nov 12 22:51:55.636332 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:51:55.636347 zram_generator::config[1058]: No configuration found.
Nov 12 22:51:55.636365 systemd[1]: Populated /etc with preset unit settings.
Nov 12 22:51:55.636380 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 22:51:55.636395 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 22:51:55.636411 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 22:51:55.636427 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 22:51:55.636446 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 22:51:55.636462 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 22:51:55.636478 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 22:51:55.636493 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 22:51:55.636509 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 22:51:55.636524 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 22:51:55.636539 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 22:51:55.636554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:51:55.636570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:51:55.636588 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 22:51:55.636604 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 22:51:55.636620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 22:51:55.636644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:51:55.636660 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 22:51:55.636675 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:51:55.636690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 22:51:55.636707 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 22:51:55.636722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 22:51:55.636830 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 22:51:55.636849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:51:55.636867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 22:51:55.636885 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:51:55.636901 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:51:55.636918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 22:51:55.636935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 22:51:55.636951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:51:55.636974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:51:55.636993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:51:55.637010 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 22:51:55.637027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 22:51:55.637043 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 22:51:55.637060 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 22:51:55.637077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:55.637093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 22:51:55.637110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 22:51:55.637129 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 22:51:55.637146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 22:51:55.637174 systemd[1]: Reached target machines.target - Containers.
Nov 12 22:51:55.637190 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 22:51:55.637206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:51:55.637221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 22:51:55.637237 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 22:51:55.637253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:51:55.637273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 22:51:55.637288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:51:55.637304 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 22:51:55.637320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:51:55.637336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 22:51:55.637352 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 22:51:55.637368 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 22:51:55.637384 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 22:51:55.637403 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 22:51:55.637419 kernel: fuse: init (API version 7.39)
Nov 12 22:51:55.637435 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 22:51:55.637450 kernel: loop: module loaded
Nov 12 22:51:55.637465 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 22:51:55.637481 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 22:51:55.637497 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 22:51:55.637515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 22:51:55.637530 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 22:51:55.637545 systemd[1]: Stopped verity-setup.service.
Nov 12 22:51:55.637564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:55.637603 systemd-journald[1132]: Collecting audit messages is disabled.
Nov 12 22:51:55.637631 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 22:51:55.637651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 22:51:55.637666 systemd-journald[1132]: Journal started
Nov 12 22:51:55.637695 systemd-journald[1132]: Runtime Journal (/run/log/journal/1549cbac838447729600aef18837edfd) is 6.0M, max 48.4M, 42.3M free.
Nov 12 22:51:55.415812 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 22:51:55.429607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 22:51:55.430062 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 22:51:55.639765 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 22:51:55.641574 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 22:51:55.642769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 22:51:55.643992 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 22:51:55.645241 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 22:51:55.646784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:51:55.648754 kernel: ACPI: bus type drm_connector registered
Nov 12 22:51:55.649224 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 22:51:55.650976 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 22:51:55.651210 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 22:51:55.652782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:51:55.653002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:51:55.654489 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 22:51:55.654707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 22:51:55.656177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:51:55.656387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:51:55.658008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 22:51:55.658228 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 22:51:55.659678 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:51:55.659906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:51:55.661364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:51:55.662844 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 22:51:55.664440 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 22:51:55.680306 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 22:51:55.693927 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 22:51:55.697157 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 22:51:55.698474 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 22:51:55.698516 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 22:51:55.700600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 22:51:55.703060 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 22:51:55.706926 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 22:51:55.708245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:51:55.710893 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 22:51:55.715128 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 22:51:55.717192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 22:51:55.721015 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 22:51:55.724998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 22:51:55.726540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 22:51:55.737150 systemd-journald[1132]: Time spent on flushing to /var/log/journal/1549cbac838447729600aef18837edfd is 22.692ms for 948 entries.
Nov 12 22:51:55.737150 systemd-journald[1132]: System Journal (/var/log/journal/1549cbac838447729600aef18837edfd) is 8.0M, max 195.6M, 187.6M free.
Nov 12 22:51:55.777908 systemd-journald[1132]: Received client request to flush runtime journal.
Nov 12 22:51:55.777980 kernel: loop0: detected capacity change from 0 to 138184
Nov 12 22:51:55.729514 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 22:51:55.739957 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 22:51:55.743289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 22:51:55.745193 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 22:51:55.747109 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 22:51:55.761719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:51:55.763957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 22:51:55.769100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 22:51:55.781180 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 22:51:55.786982 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 22:51:55.789654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 22:51:55.791997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:51:55.798764 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 22:51:55.805953 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 22:51:55.819106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 22:51:55.821643 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 22:51:55.822590 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 22:51:55.826467 kernel: loop1: detected capacity change from 0 to 211296
Nov 12 22:51:55.826607 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 22:51:55.842609 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 22:51:55.842630 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 22:51:55.848089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:51:55.859562 kernel: loop2: detected capacity change from 0 to 140992
Nov 12 22:51:55.901834 kernel: loop3: detected capacity change from 0 to 138184
Nov 12 22:51:55.914762 kernel: loop4: detected capacity change from 0 to 211296
Nov 12 22:51:55.923777 kernel: loop5: detected capacity change from 0 to 140992
Nov 12 22:51:55.934554 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 22:51:55.935305 (sd-merge)[1196]: Merged extensions into '/usr'.
Nov 12 22:51:55.939420 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 22:51:55.939437 systemd[1]: Reloading...
Nov 12 22:51:56.014758 zram_generator::config[1223]: No configuration found.
Nov 12 22:51:56.069683 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 22:51:56.147744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:51:56.197329 systemd[1]: Reloading finished in 257 ms.
Nov 12 22:51:56.231100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 22:51:56.232958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 22:51:56.244883 systemd[1]: Starting ensure-sysext.service...
Nov 12 22:51:56.246898 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 22:51:56.256513 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Nov 12 22:51:56.256532 systemd[1]: Reloading...
Nov 12 22:51:56.270899 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 22:51:56.271619 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 22:51:56.272684 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 22:51:56.273065 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Nov 12 22:51:56.273200 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Nov 12 22:51:56.277209 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 22:51:56.277339 systemd-tmpfiles[1260]: Skipping /boot
Nov 12 22:51:56.288261 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 22:51:56.288393 systemd-tmpfiles[1260]: Skipping /boot
Nov 12 22:51:56.335838 zram_generator::config[1291]: No configuration found.
Nov 12 22:51:56.459527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:51:56.525038 systemd[1]: Reloading finished in 268 ms.
Nov 12 22:51:56.549935 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 22:51:56.566367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:51:56.573448 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 12 22:51:56.576028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 22:51:56.578408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 22:51:56.582252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 22:51:56.586000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:51:56.590013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 22:51:56.597717 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 22:51:56.600272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.600776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:51:56.605857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:51:56.609201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:51:56.612785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:51:56.616491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:51:56.616588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.621590 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Nov 12 22:51:56.621715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.621995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:51:56.622208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:51:56.622338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.626108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.626948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:51:56.632464 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 22:51:56.634560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:51:56.634821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 22:51:56.636176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 22:51:56.638270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 22:51:56.640632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:51:56.641166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:51:56.643240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:51:56.643525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:51:56.646575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:51:56.646799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:51:56.648584 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 22:51:56.648949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 22:51:56.651803 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 22:51:56.661610 systemd[1]: Finished ensure-sysext.service.
Nov 12 22:51:56.663818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:51:56.671199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 22:51:56.678420 augenrules[1377]: No rules
Nov 12 22:51:56.684503 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 22:51:56.686395 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 12 22:51:56.706953 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 22:51:56.710762 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376)
Nov 12 22:51:56.708214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 22:51:56.708309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 22:51:56.722875 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376)
Nov 12 22:51:56.722780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 22:51:56.725879 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 22:51:56.727352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 22:51:56.730328 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 22:51:56.741755 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379)
Nov 12 22:51:56.748280 systemd-resolved[1329]: Positive Trust Anchors:
Nov 12 22:51:56.748299 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 22:51:56.748332 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 22:51:56.749525 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 22:51:56.757280 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Nov 12 22:51:56.762518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 22:51:56.763835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:51:56.787981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 22:51:56.796945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 22:51:56.812254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 22:51:56.814618 systemd-networkd[1394]: lo: Link UP
Nov 12 22:51:56.814627 systemd-networkd[1394]: lo: Gained carrier
Nov 12 22:51:56.817025 systemd-networkd[1394]: Enumeration completed
Nov 12 22:51:56.817125 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 22:51:56.817443 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:51:56.817447 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 22:51:56.818310 systemd-networkd[1394]: eth0: Link UP
Nov 12 22:51:56.818353 systemd-networkd[1394]: eth0: Gained carrier
Nov 12 22:51:56.818410 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:51:56.818505 systemd[1]: Reached target network.target - Network.
Nov 12 22:51:56.829813 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 22:51:56.829953 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 22:51:56.833750 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 22:51:57.922720 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 22:51:57.922770 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2024-11-12 22:51:57.922623 UTC.
Nov 12 22:51:57.924160 systemd-resolved[1329]: Clock change detected. Flushing caches.
Nov 12 22:51:57.930157 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 22:51:57.939912 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 22:51:57.940209 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 22:51:57.940934 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 22:51:57.935571 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 22:51:57.938829 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 22:51:57.946150 kernel: ACPI: button: Power Button [PWRF]
Nov 12 22:51:58.014443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:51:58.016159 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 22:51:58.027214 kernel: kvm_amd: TSC scaling supported
Nov 12 22:51:58.027273 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 22:51:58.027290 kernel: kvm_amd: Nested Paging enabled
Nov 12 22:51:58.027309 kernel: kvm_amd: LBR virtualization supported
Nov 12 22:51:58.028287 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 22:51:58.028303 kernel: kvm_amd: Virtual GIF supported
Nov 12 22:51:58.047166 kernel: EDAC MC: Ver: 3.0.0
Nov 12 22:51:58.090738 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 22:51:58.105112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:51:58.117408 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 22:51:58.126620 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 22:51:58.154519 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 22:51:58.156080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:51:58.157224 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 22:51:58.158399 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 22:51:58.159669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 22:51:58.161106 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 22:51:58.162467 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 22:51:58.163721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 22:51:58.164966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 22:51:58.164996 systemd[1]: Reached target paths.target - Path Units.
Nov 12 22:51:58.165900 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 22:51:58.167663 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 22:51:58.170314 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 22:51:58.180297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 22:51:58.183423 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 22:51:58.185144 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 22:51:58.186397 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 22:51:58.187403 systemd[1]: Reached target basic.target - Basic System.
Nov 12 22:51:58.188438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 22:51:58.188466 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 22:51:58.189469 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 22:51:58.191570 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 22:51:58.194203 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 22:51:58.194236 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 22:51:58.198308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 22:51:58.199356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 22:51:58.201377 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 22:51:58.205849 jq[1430]: false
Nov 12 22:51:58.207266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 22:51:58.209956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 22:51:58.212502 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 22:51:58.217743 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 22:51:58.219272 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 22:51:58.219729 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 22:51:58.221813 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 22:51:58.226269 dbus-daemon[1429]: [system] SELinux support is enabled
Nov 12 22:51:58.233285 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found loop3
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found loop4
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found loop5
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found sr0
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda1
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda2
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda3
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found usr
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda4
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda6
Nov 12 22:51:58.236795 extend-filesystems[1431]: Found vda7
Nov 12 22:51:58.236602 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 22:51:58.258687 jq[1441]: true
Nov 12 22:51:58.262459 extend-filesystems[1431]: Found vda9
Nov 12 22:51:58.262459 extend-filesystems[1431]: Checking size of /dev/vda9
Nov 12 22:51:58.241646 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 22:51:58.268350 update_engine[1438]: I20241112 22:51:58.252231 1438 main.cc:92] Flatcar Update Engine starting
Nov 12 22:51:58.268350 update_engine[1438]: I20241112 22:51:58.259815 1438 update_check_scheduler.cc:74] Next update check in 8m48s
Nov 12 22:51:58.268696 extend-filesystems[1431]: Resized partition /dev/vda9
Nov 12 22:51:58.252651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 22:51:58.270120 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
Nov 12 22:51:58.253348 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 22:51:58.253774 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 22:51:58.254424 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 22:51:58.261633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 22:51:58.261927 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 22:51:58.274150 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 22:51:58.275500 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 22:51:58.289333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1370)
Nov 12 22:51:58.293581 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 22:51:58.293606 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 22:51:58.300429 jq[1454]: true
Nov 12 22:51:58.294640 systemd-logind[1437]: New seat seat0.
Nov 12 22:51:58.300454 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 22:51:58.300956 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 22:51:58.308170 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 22:51:58.308251 tar[1451]: linux-amd64/helm
Nov 12 22:51:58.312337 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 12 22:51:58.321728 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 22:51:58.327976 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 22:51:58.327976 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 22:51:58.327976 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 22:51:58.334866 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Nov 12 22:51:58.329743 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 22:51:58.329949 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 22:51:58.336701 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 22:51:58.346448 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 22:51:58.347641 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 22:51:58.347843 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 22:51:58.349339 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 22:51:58.349454 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 22:51:58.353505 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 22:51:58.357602 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 22:51:58.357846 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 22:51:58.360085 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 22:51:58.369097 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 22:51:58.370762 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 22:51:58.373773 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 22:51:58.392921 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 22:51:58.393728 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 22:51:58.403547 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 22:51:58.406109 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 22:51:58.407789 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 22:51:58.482773 containerd[1458]: time="2024-11-12T22:51:58.482690764Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Nov 12 22:51:58.506004 containerd[1458]: time="2024-11-12T22:51:58.505965781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.507606 containerd[1458]: time="2024-11-12T22:51:58.507570351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 22:51:58.507606 containerd[1458]: time="2024-11-12T22:51:58.507594066Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 22:51:58.507660 containerd[1458]: time="2024-11-12T22:51:58.507608733Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 22:51:58.507807 containerd[1458]: time="2024-11-12T22:51:58.507786396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 22:51:58.507828 containerd[1458]: time="2024-11-12T22:51:58.507804781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.507881 containerd[1458]: time="2024-11-12T22:51:58.507866917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 22:51:58.507901 containerd[1458]: time="2024-11-12T22:51:58.507880653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508074 containerd[1458]: time="2024-11-12T22:51:58.508052886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508074 containerd[1458]: time="2024-11-12T22:51:58.508069708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508117 containerd[1458]: time="2024-11-12T22:51:58.508082181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508117 containerd[1458]: time="2024-11-12T22:51:58.508091248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508206 containerd[1458]: time="2024-11-12T22:51:58.508192729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508443 containerd[1458]: time="2024-11-12T22:51:58.508423251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508566 containerd[1458]: time="2024-11-12T22:51:58.508546342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 22:51:58.508566 containerd[1458]: time="2024-11-12T22:51:58.508560859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 22:51:58.508668 containerd[1458]: time="2024-11-12T22:51:58.508650728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 22:51:58.508717 containerd[1458]: time="2024-11-12T22:51:58.508704839Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 22:51:58.515078 containerd[1458]: time="2024-11-12T22:51:58.515052365Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 22:51:58.515112 containerd[1458]: time="2024-11-12T22:51:58.515090467Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 22:51:58.515112 containerd[1458]: time="2024-11-12T22:51:58.515105325Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 22:51:58.515161 containerd[1458]: time="2024-11-12T22:51:58.515120162Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 22:51:58.515161 containerd[1458]: time="2024-11-12T22:51:58.515144909Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 22:51:58.515288 containerd[1458]: time="2024-11-12T22:51:58.515267940Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 22:51:58.515476 containerd[1458]: time="2024-11-12T22:51:58.515458056Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 22:51:58.515586 containerd[1458]: time="2024-11-12T22:51:58.515567802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 22:51:58.515607 containerd[1458]: time="2024-11-12T22:51:58.515584423Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 22:51:58.515607 containerd[1458]: time="2024-11-12T22:51:58.515597067Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 22:51:58.515640 containerd[1458]: time="2024-11-12T22:51:58.515608899Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515640 containerd[1458]: time="2024-11-12T22:51:58.515620120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515640 containerd[1458]: time="2024-11-12T22:51:58.515630430Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515701 containerd[1458]: time="2024-11-12T22:51:58.515641861Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515701 containerd[1458]: time="2024-11-12T22:51:58.515653984Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515701 containerd[1458]: time="2024-11-12T22:51:58.515665606Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515701 containerd[1458]: time="2024-11-12T22:51:58.515677087Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515701 containerd[1458]: time="2024-11-12T22:51:58.515686785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515704278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515716210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515726991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515738883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515750645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515764341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515776944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515792 containerd[1458]: time="2024-11-12T22:51:58.515789247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515801561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515819524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515831036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515841916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515853468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515866072Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515883524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515894956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.515931 containerd[1458]: time="2024-11-12T22:51:58.515905576Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.515945360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.515959266Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516017566Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516029077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516038044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516049596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516058853Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 22:51:58.516096 containerd[1458]: time="2024-11-12T22:51:58.516068351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 22:51:58.516381 containerd[1458]: time="2024-11-12T22:51:58.516338878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 22:51:58.516381 containerd[1458]: time="2024-11-12T22:51:58.516381428Z" level=info msg="Connect containerd service"
Nov 12 22:51:58.516527 containerd[1458]: time="2024-11-12T22:51:58.516405153Z" level=info msg="using legacy CRI server"
Nov 12 22:51:58.516527 containerd[1458]: time="2024-11-12T22:51:58.516411224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 22:51:58.516527 containerd[1458]: time="2024-11-12T22:51:58.516491054Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 22:51:58.517011 containerd[1458]: time="2024-11-12T22:51:58.516988216Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 22:51:58.517289 containerd[1458]: time="2024-11-12T22:51:58.517270245Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 22:51:58.517331 containerd[1458]: time="2024-11-12T22:51:58.517319778Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 22:51:58.517386 containerd[1458]: time="2024-11-12T22:51:58.517363771Z" level=info msg="Start subscribing containerd event"
Nov 12 22:51:58.517415 containerd[1458]: time="2024-11-12T22:51:58.517400369Z" level=info msg="Start recovering state"
Nov 12 22:51:58.517459 containerd[1458]: time="2024-11-12T22:51:58.517447849Z" level=info msg="Start event monitor"
Nov 12 22:51:58.517478 containerd[1458]: time="2024-11-12T22:51:58.517472375Z" level=info msg="Start snapshots syncer"
Nov 12 22:51:58.517497 containerd[1458]: time="2024-11-12T22:51:58.517479788Z" level=info msg="Start cni network conf syncer for default"
Nov 12 22:51:58.517497 containerd[1458]: time="2024-11-12T22:51:58.517486281Z" level=info msg="Start streaming server"
Nov 12 22:51:58.517616 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 22:51:58.519189 containerd[1458]: time="2024-11-12T22:51:58.517897041Z" level=info msg="containerd successfully booted in 0.036188s"
Nov 12 22:51:58.689827 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 22:51:58.692348 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:34944.service - OpenSSH per-connection server daemon (10.0.0.1:34944).
Nov 12 22:51:58.700820 tar[1451]: linux-amd64/LICENSE
Nov 12 22:51:58.700922 tar[1451]: linux-amd64/README.md
Nov 12 22:51:58.715416 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 22:51:58.744102 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 34944 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:51:58.745834 sshd-session[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:51:58.754427 systemd-logind[1437]: New session 1 of user core.
Nov 12 22:51:58.755722 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 22:51:58.768399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 22:51:58.784028 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 22:51:58.795367 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 22:51:58.799221 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 22:51:58.907371 systemd[1526]: Queued start job for default target default.target.
Nov 12 22:51:58.923911 systemd[1526]: Created slice app.slice - User Application Slice.
Nov 12 22:51:58.923949 systemd[1526]: Reached target paths.target - Paths.
Nov 12 22:51:58.923967 systemd[1526]: Reached target timers.target - Timers.
Nov 12 22:51:58.925924 systemd[1526]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 22:51:58.939827 systemd[1526]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 22:51:58.940006 systemd[1526]: Reached target sockets.target - Sockets.
Nov 12 22:51:58.940035 systemd[1526]: Reached target basic.target - Basic System.
Nov 12 22:51:58.940087 systemd[1526]: Reached target default.target - Main User Target.
Nov 12 22:51:58.940160 systemd[1526]: Startup finished in 134ms.
Nov 12 22:51:58.940687 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 22:51:58.943572 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 22:51:59.006420 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:40626.service - OpenSSH per-connection server daemon (10.0.0.1:40626).
Nov 12 22:51:59.048720 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:51:59.050203 sshd-session[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:51:59.054965 systemd-logind[1437]: New session 2 of user core.
Nov 12 22:51:59.065297 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 22:51:59.119027 sshd[1539]: Connection closed by 10.0.0.1 port 40626
Nov 12 22:51:59.119416 sshd-session[1537]: pam_unix(sshd:session): session closed for user core
Nov 12 22:51:59.130880 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:40626.service: Deactivated successfully.
Nov 12 22:51:59.132570 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 22:51:59.133233 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit.
Nov 12 22:51:59.144412 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:40634.service - OpenSSH per-connection server daemon (10.0.0.1:40634).
Nov 12 22:51:59.146340 systemd-logind[1437]: Removed session 2.
Nov 12 22:51:59.179451 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 40634 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:51:59.180942 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:51:59.184926 systemd-logind[1437]: New session 3 of user core.
Nov 12 22:51:59.194253 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 22:51:59.249772 sshd[1546]: Connection closed by 10.0.0.1 port 40634
Nov 12 22:51:59.250207 sshd-session[1544]: pam_unix(sshd:session): session closed for user core
Nov 12 22:51:59.254564 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:40634.service: Deactivated successfully.
Nov 12 22:51:59.256279 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 22:51:59.256930 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit.
Nov 12 22:51:59.257734 systemd-logind[1437]: Removed session 3.
Nov 12 22:51:59.852308 systemd-networkd[1394]: eth0: Gained IPv6LL
Nov 12 22:51:59.855638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 22:51:59.857632 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 22:51:59.871518 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 22:51:59.874307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:51:59.876565 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 22:51:59.899518 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 22:51:59.901649 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 12 22:51:59.901947 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 12 22:51:59.906323 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 22:52:00.474290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:52:00.475954 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 22:52:00.477320 systemd[1]: Startup finished in 672ms (kernel) + 5.197s (initrd) + 4.537s (userspace) = 10.407s.
Nov 12 22:52:00.502521 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:52:00.980611 kubelet[1572]: E1112 22:52:00.980527 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:52:00.985251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:52:00.985513 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:52:09.260296 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820).
Nov 12 22:52:09.295788 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:09.296723 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:09.300700 systemd-logind[1437]: New session 4 of user core.
Nov 12 22:52:09.310270 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 22:52:09.364023 sshd[1588]: Connection closed by 10.0.0.1 port 42820
Nov 12 22:52:09.364443 sshd-session[1586]: pam_unix(sshd:session): session closed for user core
Nov 12 22:52:09.375486 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:42820.service: Deactivated successfully.
Nov 12 22:52:09.378217 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 22:52:09.380208 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit.
Nov 12 22:52:09.390784 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826).
Nov 12 22:52:09.391985 systemd-logind[1437]: Removed session 4.
Nov 12 22:52:09.424084 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:09.425632 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:09.430655 systemd-logind[1437]: New session 5 of user core.
Nov 12 22:52:09.440467 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 22:52:09.489538 sshd[1595]: Connection closed by 10.0.0.1 port 42826
Nov 12 22:52:09.489830 sshd-session[1593]: pam_unix(sshd:session): session closed for user core
Nov 12 22:52:09.500139 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:42826.service: Deactivated successfully.
Nov 12 22:52:09.502069 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 22:52:09.503704 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit.
Nov 12 22:52:09.505015 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:42836.service - OpenSSH per-connection server daemon (10.0.0.1:42836).
Nov 12 22:52:09.505783 systemd-logind[1437]: Removed session 5.
Nov 12 22:52:09.542486 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 42836 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:09.544411 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:09.548775 systemd-logind[1437]: New session 6 of user core.
Nov 12 22:52:09.565254 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 22:52:09.618800 sshd[1602]: Connection closed by 10.0.0.1 port 42836
Nov 12 22:52:09.619087 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Nov 12 22:52:09.635249 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:42836.service: Deactivated successfully.
Nov 12 22:52:09.637115 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 22:52:09.638724 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit.
Nov 12 22:52:09.645638 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:42846.service - OpenSSH per-connection server daemon (10.0.0.1:42846).
Nov 12 22:52:09.646693 systemd-logind[1437]: Removed session 6.
Nov 12 22:52:09.677550 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42846 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:09.678842 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:09.682537 systemd-logind[1437]: New session 7 of user core.
Nov 12 22:52:09.692268 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 22:52:09.750381 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 22:52:09.750725 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:52:09.765261 sudo[1610]: pam_unix(sudo:session): session closed for user root
Nov 12 22:52:09.766990 sshd[1609]: Connection closed by 10.0.0.1 port 42846
Nov 12 22:52:09.767406 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Nov 12 22:52:09.776669 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:42846.service: Deactivated successfully.
Nov 12 22:52:09.778406 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 22:52:09.779708 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit.
Nov 12 22:52:09.792490 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:42854.service - OpenSSH per-connection server daemon (10.0.0.1:42854).
Nov 12 22:52:09.793462 systemd-logind[1437]: Removed session 7.
Nov 12 22:52:09.823218 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 42854 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:09.824788 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:09.828275 systemd-logind[1437]: New session 8 of user core.
Nov 12 22:52:09.838247 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 22:52:09.892442 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 22:52:09.892772 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:52:09.896140 sudo[1619]: pam_unix(sudo:session): session closed for user root
Nov 12 22:52:09.902410 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 12 22:52:09.902752 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:52:09.924417 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 12 22:52:09.955552 augenrules[1641]: No rules
Nov 12 22:52:09.957426 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 22:52:09.957677 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 12 22:52:09.958851 sudo[1618]: pam_unix(sudo:session): session closed for user root
Nov 12 22:52:09.960380 sshd[1617]: Connection closed by 10.0.0.1 port 42854
Nov 12 22:52:09.960701 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Nov 12 22:52:09.967952 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:42854.service: Deactivated successfully.
Nov 12 22:52:09.969840 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 22:52:09.971411 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit.
Nov 12 22:52:09.985547 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:42856.service - OpenSSH per-connection server daemon (10.0.0.1:42856).
Nov 12 22:52:09.986488 systemd-logind[1437]: Removed session 8.
Nov 12 22:52:10.017587 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 42856 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:52:10.019245 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:52:10.023163 systemd-logind[1437]: New session 9 of user core.
Nov 12 22:52:10.038386 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 12 22:52:10.091693 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 22:52:10.092034 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:52:10.730354 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 22:52:10.730536 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 22:52:11.235729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 22:52:11.246308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:52:11.349419 dockerd[1672]: time="2024-11-12T22:52:11.349345952Z" level=info msg="Starting up"
Nov 12 22:52:11.493026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:52:11.498023 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:52:11.894358 dockerd[1672]: time="2024-11-12T22:52:11.894206715Z" level=info msg="Loading containers: start."
Nov 12 22:52:11.908408 kubelet[1704]: E1112 22:52:11.908038 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:52:11.918523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:52:11.918779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:52:12.076161 kernel: Initializing XFRM netlink socket
Nov 12 22:52:12.165152 systemd-networkd[1394]: docker0: Link UP
Nov 12 22:52:12.212383 dockerd[1672]: time="2024-11-12T22:52:12.212333890Z" level=info msg="Loading containers: done."
Nov 12 22:52:12.232853 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4225768011-merged.mount: Deactivated successfully.
Nov 12 22:52:12.236415 dockerd[1672]: time="2024-11-12T22:52:12.236373291Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 22:52:12.236548 dockerd[1672]: time="2024-11-12T22:52:12.236513785Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Nov 12 22:52:12.236699 dockerd[1672]: time="2024-11-12T22:52:12.236679426Z" level=info msg="Daemon has completed initialization"
Nov 12 22:52:12.278663 dockerd[1672]: time="2024-11-12T22:52:12.278563057Z" level=info msg="API listen on /run/docker.sock"
Nov 12 22:52:12.279043 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 22:52:13.389719 containerd[1458]: time="2024-11-12T22:52:13.389667401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\""
Nov 12 22:52:14.157734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467447549.mount: Deactivated successfully.
Nov 12 22:52:19.187436 containerd[1458]: time="2024-11-12T22:52:19.187360206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:19.204612 containerd[1458]: time="2024-11-12T22:52:19.204540521Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799"
Nov 12 22:52:19.215517 containerd[1458]: time="2024-11-12T22:52:19.215457879Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:19.236119 containerd[1458]: time="2024-11-12T22:52:19.236083477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:19.237659 containerd[1458]: time="2024-11-12T22:52:19.237410837Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 5.847695756s"
Nov 12 22:52:19.237659 containerd[1458]: time="2024-11-12T22:52:19.237449740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\""
Nov 12 22:52:19.263187 containerd[1458]: time="2024-11-12T22:52:19.263157390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\""
Nov 12 22:52:22.084214 containerd[1458]: time="2024-11-12T22:52:22.084161046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:22.094684 containerd[1458]: time="2024-11-12T22:52:22.094632287Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299"
Nov 12 22:52:22.115981 containerd[1458]: time="2024-11-12T22:52:22.115939233Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:22.143134 containerd[1458]: time="2024-11-12T22:52:22.143101963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:22.144233 containerd[1458]: time="2024-11-12T22:52:22.144195734Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.88100436s"
Nov 12 22:52:22.144282 containerd[1458]: time="2024-11-12T22:52:22.144229007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\""
Nov 12 22:52:22.166726 containerd[1458]: time="2024-11-12T22:52:22.166689436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\""
Nov 12 22:52:22.168925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 22:52:22.176289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:52:22.315470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:52:22.319811 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:52:22.357178 kubelet[1975]: E1112 22:52:22.356117 1975 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:52:22.360842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:52:22.361063 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:52:25.582358 containerd[1458]: time="2024-11-12T22:52:25.582285889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:25.597306 containerd[1458]: time="2024-11-12T22:52:25.597211615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660"
Nov 12 22:52:25.609395 containerd[1458]: time="2024-11-12T22:52:25.609342259Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:25.621847 containerd[1458]: time="2024-11-12T22:52:25.621760181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:25.623081 containerd[1458]: time="2024-11-12T22:52:25.623028761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 3.456297907s"
Nov 12 22:52:25.623081 containerd[1458]: time="2024-11-12T22:52:25.623071110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\""
Nov 12 22:52:25.647248 containerd[1458]: time="2024-11-12T22:52:25.647204628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\""
Nov 12 22:52:27.506058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1696866766.mount: Deactivated successfully.
Nov 12 22:52:28.125956 containerd[1458]: time="2024-11-12T22:52:28.125874183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:28.143202 containerd[1458]: time="2024-11-12T22:52:28.143111586Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816"
Nov 12 22:52:28.158905 containerd[1458]: time="2024-11-12T22:52:28.158824779Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:28.177032 containerd[1458]: time="2024-11-12T22:52:28.176974242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:28.177613 containerd[1458]: time="2024-11-12T22:52:28.177569329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.530325237s"
Nov 12 22:52:28.177640 containerd[1458]: time="2024-11-12T22:52:28.177610416Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\""
Nov 12 22:52:28.203700 containerd[1458]: time="2024-11-12T22:52:28.203638998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 22:52:30.156902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505561399.mount: Deactivated successfully.
Nov 12 22:52:32.437548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 22:52:32.446287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:52:32.594249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:52:32.598698 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:52:32.640674 kubelet[2024]: E1112 22:52:32.640526 2024 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:52:32.645809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:52:32.646005 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:52:34.490430 containerd[1458]: time="2024-11-12T22:52:34.490359137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:34.491317 containerd[1458]: time="2024-11-12T22:52:34.491253392Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 22:52:34.492622 containerd[1458]: time="2024-11-12T22:52:34.492581650Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:34.495490 containerd[1458]: time="2024-11-12T22:52:34.495458298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:34.496622 containerd[1458]: time="2024-11-12T22:52:34.496592834Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 6.292899344s"
Nov 12 22:52:34.496664 containerd[1458]: time="2024-11-12T22:52:34.496622310Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 22:52:34.519908 containerd[1458]: time="2024-11-12T22:52:34.519864863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 12 22:52:35.550919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845395517.mount: Deactivated successfully.
Nov 12 22:52:35.561093 containerd[1458]: time="2024-11-12T22:52:35.561021624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:35.561972 containerd[1458]: time="2024-11-12T22:52:35.561918752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Nov 12 22:52:35.563656 containerd[1458]: time="2024-11-12T22:52:35.563626723Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:35.567497 containerd[1458]: time="2024-11-12T22:52:35.567433305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:52:35.568259 containerd[1458]: time="2024-11-12T22:52:35.568217607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.048314241s"
Nov 12 22:52:35.568259 containerd[1458]: time="2024-11-12T22:52:35.568255841Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 12 22:52:35.590070 containerd[1458]: time="2024-11-12T22:52:35.590013967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Nov 12 22:52:36.086787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287461600.mount: Deactivated successfully.
Nov 12 22:52:40.274115 containerd[1458]: time="2024-11-12T22:52:40.274049721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:52:40.289446 containerd[1458]: time="2024-11-12T22:52:40.289405857Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 22:52:40.332500 containerd[1458]: time="2024-11-12T22:52:40.332467518Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:52:40.421932 containerd[1458]: time="2024-11-12T22:52:40.421893160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:52:40.423070 containerd[1458]: time="2024-11-12T22:52:40.423043100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.832990669s" Nov 12 22:52:40.423149 containerd[1458]: time="2024-11-12T22:52:40.423072506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 22:52:42.687476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 22:52:42.699316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:43.024547 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 22:52:43.024689 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 12 22:52:43.025091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:43.036579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:43.053918 systemd[1]: Reloading requested from client PID 2215 ('systemctl') (unit session-9.scope)... Nov 12 22:52:43.053934 systemd[1]: Reloading... Nov 12 22:52:43.087234 update_engine[1438]: I20241112 22:52:43.087182 1438 update_attempter.cc:509] Updating boot flags... Nov 12 22:52:43.149230 zram_generator::config[2261]: No configuration found. Nov 12 22:52:44.241595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:52:44.317295 systemd[1]: Reloading finished in 1262 ms. Nov 12 22:52:44.370833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:44.374928 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:52:44.375238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:44.376766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:44.869054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:44.874044 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:52:44.887163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2311) Nov 12 22:52:44.932207 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 22:52:44.932207 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 22:52:44.932207 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 22:52:44.932606 kubelet[2308]: I1112 22:52:44.932247 2308 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 22:52:45.118527 kubelet[2308]: I1112 22:52:45.118489 2308 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 22:52:45.118527 kubelet[2308]: I1112 22:52:45.118514 2308 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 22:52:45.118711 kubelet[2308]: I1112 22:52:45.118696 2308 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 22:52:45.134117 kubelet[2308]: E1112 22:52:45.134025 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.136790 kubelet[2308]: I1112 22:52:45.136757 2308 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 22:52:45.147422 kubelet[2308]: I1112 22:52:45.147401 2308 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 22:52:45.148221 kubelet[2308]: I1112 22:52:45.148186 2308 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 22:52:45.148620 kubelet[2308]: I1112 22:52:45.148593 2308 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 22:52:45.148710 kubelet[2308]: I1112 22:52:45.148623 2308 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 22:52:45.148710 kubelet[2308]: I1112 22:52:45.148634 2308 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 22:52:45.148760 kubelet[2308]: I1112 22:52:45.148754 2308 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:52:45.148863 kubelet[2308]: I1112 22:52:45.148848 2308 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 22:52:45.148892 kubelet[2308]: I1112 22:52:45.148865 2308 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 22:52:45.148912 kubelet[2308]: I1112 22:52:45.148892 2308 kubelet.go:312] "Adding apiserver pod source"
Nov 12 22:52:45.148912 kubelet[2308]: I1112 22:52:45.148909 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 22:52:45.149323 kubelet[2308]: W1112 22:52:45.149272 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.149360 kubelet[2308]: E1112 22:52:45.149324 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.149385 kubelet[2308]: W1112 22:52:45.149338 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.149385 kubelet[2308]: E1112 22:52:45.149380 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.149871 kubelet[2308]: I1112 22:52:45.149850 2308 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 12 22:52:45.152214 kubelet[2308]: I1112 22:52:45.152182 2308 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 22:52:45.152266 kubelet[2308]: W1112 22:52:45.152247 2308 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 22:52:45.152862 kubelet[2308]: I1112 22:52:45.152834 2308 server.go:1256] "Started kubelet"
Nov 12 22:52:45.152979 kubelet[2308]: I1112 22:52:45.152948 2308 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 22:52:45.154183 kubelet[2308]: I1112 22:52:45.154032 2308 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 22:52:45.154613 kubelet[2308]: I1112 22:52:45.154301 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 22:52:45.156520 kubelet[2308]: I1112 22:52:45.156501 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 22:52:45.156703 kubelet[2308]: I1112 22:52:45.156683 2308 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 22:52:45.157513 kubelet[2308]: E1112 22:52:45.157237 2308 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 22:52:45.157513 kubelet[2308]: I1112 22:52:45.157274 2308 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 22:52:45.157513 kubelet[2308]: I1112 22:52:45.157338 2308 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 22:52:45.157513 kubelet[2308]: I1112 22:52:45.157383 2308 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 22:52:45.157653 kubelet[2308]: W1112 22:52:45.157589 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.157653 kubelet[2308]: E1112 22:52:45.157621 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.158352 kubelet[2308]: E1112 22:52:45.158002 2308 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 22:52:45.158352 kubelet[2308]: E1112 22:52:45.158026 2308 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075a638095a105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:52:45.152796933 +0000 UTC m=+0.273891317,LastTimestamp:2024-11-12 22:52:45.152796933 +0000 UTC m=+0.273891317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 22:52:45.158352 kubelet[2308]: I1112 22:52:45.158210 2308 factory.go:221] Registration of the systemd container factory successfully
Nov 12 22:52:45.158352 kubelet[2308]: I1112 22:52:45.158273 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 22:52:45.158352 kubelet[2308]: E1112 22:52:45.158307 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms"
Nov 12 22:52:45.159862 kubelet[2308]: I1112 22:52:45.159835 2308 factory.go:221] Registration of the containerd container factory successfully
Nov 12 22:52:45.170897 kubelet[2308]: I1112 22:52:45.170864 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 22:52:45.172116 kubelet[2308]: I1112 22:52:45.172089 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 22:52:45.172226 kubelet[2308]: I1112 22:52:45.172120 2308 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 22:52:45.172226 kubelet[2308]: I1112 22:52:45.172167 2308 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 22:52:45.172265 kubelet[2308]: E1112 22:52:45.172231 2308 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 22:52:45.173495 kubelet[2308]: I1112 22:52:45.173144 2308 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 22:52:45.173495 kubelet[2308]: I1112 22:52:45.173160 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 22:52:45.173495 kubelet[2308]: I1112 22:52:45.173174 2308 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:52:45.173495 kubelet[2308]: W1112 22:52:45.173312 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.173495 kubelet[2308]: E1112 22:52:45.173350 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:45.261010 kubelet[2308]: I1112 22:52:45.260975 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 22:52:45.263159 kubelet[2308]: E1112 22:52:45.263141 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Nov 12 22:52:45.272370 kubelet[2308]: E1112 22:52:45.272344 2308 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 22:52:45.291279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2320)
Nov 12 22:52:45.358850 kubelet[2308]: E1112 22:52:45.358809 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms"
Nov 12 22:52:45.465354 kubelet[2308]: I1112 22:52:45.465304 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 22:52:45.465658 kubelet[2308]: E1112 22:52:45.465641 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Nov 12 22:52:45.472703 kubelet[2308]: E1112 22:52:45.472685 2308 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 22:52:45.538190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2320)
Nov 12 22:52:45.597012 kubelet[2308]: I1112 22:52:45.596974 2308 policy_none.go:49] "None policy: Start"
Nov 12 22:52:45.597853 kubelet[2308]: I1112 22:52:45.597812 2308 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 22:52:45.597853 kubelet[2308]: I1112 22:52:45.597843 2308 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 22:52:45.630468 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 22:52:45.644765 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 22:52:45.647636 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 22:52:45.656068 kubelet[2308]: I1112 22:52:45.656024 2308 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 22:52:45.656377 kubelet[2308]: I1112 22:52:45.656326 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 22:52:45.657294 kubelet[2308]: E1112 22:52:45.657269 2308 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 22:52:45.759919 kubelet[2308]: E1112 22:52:45.759794 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms"
Nov 12 22:52:45.867622 kubelet[2308]: I1112 22:52:45.867590 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 22:52:45.867991 kubelet[2308]: E1112 22:52:45.867957 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Nov 12 22:52:45.873076 kubelet[2308]: I1112 22:52:45.873045 2308 topology_manager.go:215] "Topology Admit Handler" podUID="6c29179bf054d187d40ec4bbf4e563a6" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 22:52:45.874100 kubelet[2308]: I1112 22:52:45.874071 2308 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 22:52:45.874753 kubelet[2308]: I1112 22:52:45.874719 2308 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 22:52:45.879693 systemd[1]: Created slice kubepods-burstable-pod6c29179bf054d187d40ec4bbf4e563a6.slice - libcontainer container kubepods-burstable-pod6c29179bf054d187d40ec4bbf4e563a6.slice.
Nov 12 22:52:45.891269 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice.
Nov 12 22:52:45.894803 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice.
Nov 12 22:52:45.961743 kubelet[2308]: I1112 22:52:45.961681 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:52:45.961743 kubelet[2308]: I1112 22:52:45.961739 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 22:52:45.962121 kubelet[2308]: I1112 22:52:45.961772 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:52:45.962121 kubelet[2308]: I1112 22:52:45.961835 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:52:45.962121 kubelet[2308]: I1112 22:52:45.961878 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:52:45.962121 kubelet[2308]: I1112 22:52:45.961907 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:52:45.962121 kubelet[2308]: I1112 22:52:45.961926 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:52:45.962281 kubelet[2308]: I1112 22:52:45.961967 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:52:45.962281 kubelet[2308]: I1112 22:52:45.962017 2308 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:52:46.002251 kubelet[2308]: W1112 22:52:46.002193 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.002251 kubelet[2308]: E1112 22:52:46.002244 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.132423 kubelet[2308]: W1112 22:52:46.132269 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.132423 kubelet[2308]: E1112 22:52:46.132339 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.180991 kubelet[2308]: W1112 22:52:46.180928 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.180991 kubelet[2308]: E1112 22:52:46.180988 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.189257 kubelet[2308]: E1112 22:52:46.189218 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:52:46.189963 containerd[1458]: time="2024-11-12T22:52:46.189901746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c29179bf054d187d40ec4bbf4e563a6,Namespace:kube-system,Attempt:0,}"
Nov 12 22:52:46.194069 kubelet[2308]: E1112 22:52:46.194042 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:52:46.194399 containerd[1458]: time="2024-11-12T22:52:46.194358813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}"
Nov 12 22:52:46.197698 kubelet[2308]: E1112 22:52:46.197666 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:52:46.198266 containerd[1458]: time="2024-11-12T22:52:46.198236513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}"
Nov 12 22:52:46.520146 kubelet[2308]: W1112 22:52:46.520061 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.520146 kubelet[2308]: E1112 22:52:46.520148 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:46.560880 kubelet[2308]: E1112 22:52:46.560833 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s"
Nov 12 22:52:46.669686 kubelet[2308]: I1112 22:52:46.669651 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 22:52:46.670092 kubelet[2308]: E1112 22:52:46.670053 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Nov 12 22:52:47.209829 kubelet[2308]: E1112 22:52:47.209796 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:47.761547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143833069.mount: Deactivated successfully.
Nov 12 22:52:47.922069 containerd[1458]: time="2024-11-12T22:52:47.922010049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:52:47.937524 containerd[1458]: time="2024-11-12T22:52:47.937489297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:52:47.954976 containerd[1458]: time="2024-11-12T22:52:47.954914100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 12 22:52:47.993283 containerd[1458]: time="2024-11-12T22:52:47.993238499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 22:52:48.016732 containerd[1458]: time="2024-11-12T22:52:48.016630708Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:52:48.033486 containerd[1458]: time="2024-11-12T22:52:48.033458682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:52:48.052045 containerd[1458]: time="2024-11-12T22:52:48.052011060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 22:52:48.071585 containerd[1458]: time="2024-11-12T22:52:48.071548441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:52:48.072246 containerd[1458]: time="2024-11-12T22:52:48.072211135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.88217791s"
Nov 12 22:52:48.094679 kubelet[2308]: W1112 22:52:48.094647 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:48.094742 kubelet[2308]: E1112 22:52:48.094683 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Nov 12 22:52:48.114171 containerd[1458]: time="2024-11-12T22:52:48.114147126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.919690036s"
Nov 12 22:52:48.154625 containerd[1458]: time="2024-11-12T22:52:48.154597784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.956307469s"
Nov 12 22:52:48.161740 kubelet[2308]: E1112 22:52:48.161684 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="3.2s"
Nov 12 22:52:48.272154 kubelet[2308]: I1112 22:52:48.272023 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 22:52:48.272513 kubelet[2308]: E1112 22:52:48.272434 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Nov 12 22:52:48.479812 containerd[1458]: time="2024-11-12T22:52:48.479521679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:52:48.479812 containerd[1458]: time="2024-11-12T22:52:48.479583918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:52:48.479812 containerd[1458]: time="2024-11-12T22:52:48.479599597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:52:48.479812 containerd[1458]: time="2024-11-12T22:52:48.479683616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:52:48.480971 containerd[1458]: time="2024-11-12T22:52:48.479097246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:52:48.481051 containerd[1458]: time="2024-11-12T22:52:48.480963849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:52:48.481051 containerd[1458]: time="2024-11-12T22:52:48.480989819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:52:48.481148 containerd[1458]: time="2024-11-12T22:52:48.481064630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:52:48.507372 systemd[1]: Started cri-containerd-5aa07db9c6e26effe4c59eb3669d589bf2c2ddc09492182cbe296f5e6d3a4b48.scope - libcontainer container 5aa07db9c6e26effe4c59eb3669d589bf2c2ddc09492182cbe296f5e6d3a4b48.
Nov 12 22:52:48.509035 systemd[1]: Started cri-containerd-a16ed45c9c66030ebe0491cb71c07f7340e7f39a2a4114994a39d71539ab6e40.scope - libcontainer container a16ed45c9c66030ebe0491cb71c07f7340e7f39a2a4114994a39d71539ab6e40.
Nov 12 22:52:48.514360 containerd[1458]: time="2024-11-12T22:52:48.514253964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:52:48.514360 containerd[1458]: time="2024-11-12T22:52:48.514309028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:52:48.514360 containerd[1458]: time="2024-11-12T22:52:48.514323245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:52:48.514734 containerd[1458]: time="2024-11-12T22:52:48.514612843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:52:48.534266 systemd[1]: Started cri-containerd-e757c59141772e7963e1daa1b8fecad6afc7a43503ce90a1df1e073cca123920.scope - libcontainer container e757c59141772e7963e1daa1b8fecad6afc7a43503ce90a1df1e073cca123920. 
Nov 12 22:52:48.546040 containerd[1458]: time="2024-11-12T22:52:48.546000687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c29179bf054d187d40ec4bbf4e563a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa07db9c6e26effe4c59eb3669d589bf2c2ddc09492182cbe296f5e6d3a4b48\"" Nov 12 22:52:48.546499 containerd[1458]: time="2024-11-12T22:52:48.546474514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16ed45c9c66030ebe0491cb71c07f7340e7f39a2a4114994a39d71539ab6e40\"" Nov 12 22:52:48.549046 kubelet[2308]: E1112 22:52:48.548943 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:48.549046 kubelet[2308]: E1112 22:52:48.548965 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:48.551647 containerd[1458]: time="2024-11-12T22:52:48.551624171Z" level=info msg="CreateContainer within sandbox \"a16ed45c9c66030ebe0491cb71c07f7340e7f39a2a4114994a39d71539ab6e40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:52:48.551811 containerd[1458]: time="2024-11-12T22:52:48.551642937Z" level=info msg="CreateContainer within sandbox \"5aa07db9c6e26effe4c59eb3669d589bf2c2ddc09492182cbe296f5e6d3a4b48\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:52:48.562612 kubelet[2308]: W1112 22:52:48.562576 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Nov 12 22:52:48.562612 
kubelet[2308]: E1112 22:52:48.562611 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Nov 12 22:52:48.569918 containerd[1458]: time="2024-11-12T22:52:48.569827719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e757c59141772e7963e1daa1b8fecad6afc7a43503ce90a1df1e073cca123920\"" Nov 12 22:52:48.570546 kubelet[2308]: E1112 22:52:48.570529 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:48.571946 containerd[1458]: time="2024-11-12T22:52:48.571919378Z" level=info msg="CreateContainer within sandbox \"e757c59141772e7963e1daa1b8fecad6afc7a43503ce90a1df1e073cca123920\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:52:48.589541 kubelet[2308]: W1112 22:52:48.589521 2308 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Nov 12 22:52:48.589592 kubelet[2308]: E1112 22:52:48.589549 2308 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Nov 12 22:52:49.200381 containerd[1458]: time="2024-11-12T22:52:49.200337952Z" level=info msg="CreateContainer within sandbox \"5aa07db9c6e26effe4c59eb3669d589bf2c2ddc09492182cbe296f5e6d3a4b48\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3dd79c5cedcdcfcba375d9c3a48587df547ff698003367ef216032433928afc4\"" Nov 12 22:52:49.200980 containerd[1458]: time="2024-11-12T22:52:49.200922398Z" level=info msg="StartContainer for \"3dd79c5cedcdcfcba375d9c3a48587df547ff698003367ef216032433928afc4\"" Nov 12 22:52:49.228269 systemd[1]: Started cri-containerd-3dd79c5cedcdcfcba375d9c3a48587df547ff698003367ef216032433928afc4.scope - libcontainer container 3dd79c5cedcdcfcba375d9c3a48587df547ff698003367ef216032433928afc4. Nov 12 22:52:49.304571 containerd[1458]: time="2024-11-12T22:52:49.304507271Z" level=info msg="CreateContainer within sandbox \"a16ed45c9c66030ebe0491cb71c07f7340e7f39a2a4114994a39d71539ab6e40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9b8c08072b393f0a33ce22376255e0241942fc4a94c620c66ff3e48ccb6f2fe\"" Nov 12 22:52:49.304733 containerd[1458]: time="2024-11-12T22:52:49.304634502Z" level=info msg="CreateContainer within sandbox \"e757c59141772e7963e1daa1b8fecad6afc7a43503ce90a1df1e073cca123920\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"633618ed477b56f32a64441237c66c07f3837fff54e271cada7b49b193c35094\"" Nov 12 22:52:49.304733 containerd[1458]: time="2024-11-12T22:52:49.304660631Z" level=info msg="StartContainer for \"3dd79c5cedcdcfcba375d9c3a48587df547ff698003367ef216032433928afc4\" returns successfully" Nov 12 22:52:49.305510 containerd[1458]: time="2024-11-12T22:52:49.305450916Z" level=info msg="StartContainer for \"f9b8c08072b393f0a33ce22376255e0241942fc4a94c620c66ff3e48ccb6f2fe\"" Nov 12 22:52:49.306609 containerd[1458]: time="2024-11-12T22:52:49.305615668Z" level=info msg="StartContainer for \"633618ed477b56f32a64441237c66c07f3837fff54e271cada7b49b193c35094\"" Nov 12 22:52:49.333298 systemd[1]: Started cri-containerd-633618ed477b56f32a64441237c66c07f3837fff54e271cada7b49b193c35094.scope - libcontainer container 
633618ed477b56f32a64441237c66c07f3837fff54e271cada7b49b193c35094. Nov 12 22:52:49.336805 systemd[1]: Started cri-containerd-f9b8c08072b393f0a33ce22376255e0241942fc4a94c620c66ff3e48ccb6f2fe.scope - libcontainer container f9b8c08072b393f0a33ce22376255e0241942fc4a94c620c66ff3e48ccb6f2fe. Nov 12 22:52:49.457559 containerd[1458]: time="2024-11-12T22:52:49.457446123Z" level=info msg="StartContainer for \"633618ed477b56f32a64441237c66c07f3837fff54e271cada7b49b193c35094\" returns successfully" Nov 12 22:52:49.458151 containerd[1458]: time="2024-11-12T22:52:49.457907606Z" level=info msg="StartContainer for \"f9b8c08072b393f0a33ce22376255e0241942fc4a94c620c66ff3e48ccb6f2fe\" returns successfully" Nov 12 22:52:50.185706 kubelet[2308]: E1112 22:52:50.185237 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:50.186727 kubelet[2308]: E1112 22:52:50.186699 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:50.188993 kubelet[2308]: E1112 22:52:50.188969 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:51.140307 kubelet[2308]: E1112 22:52:51.140259 2308 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 22:52:51.190240 kubelet[2308]: E1112 22:52:51.190199 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:51.190240 kubelet[2308]: E1112 22:52:51.190197 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:51.190725 kubelet[2308]: E1112 22:52:51.190281 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:51.375037 kubelet[2308]: E1112 22:52:51.375001 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 22:52:51.474594 kubelet[2308]: I1112 22:52:51.474556 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:52:51.534338 kubelet[2308]: I1112 22:52:51.534297 2308 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:52:52.153522 kubelet[2308]: I1112 22:52:52.153484 2308 apiserver.go:52] "Watching apiserver" Nov 12 22:52:52.157670 kubelet[2308]: I1112 22:52:52.157597 2308 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:52:52.219983 kubelet[2308]: E1112 22:52:52.219937 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:52.918174 kubelet[2308]: E1112 22:52:52.918111 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:53.191495 kubelet[2308]: E1112 22:52:53.191470 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:53.191864 kubelet[2308]: E1112 22:52:53.191837 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:55.234874 kubelet[2308]: I1112 22:52:55.234731 2308 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.234691609 podStartE2EDuration="3.234691609s" podCreationTimestamp="2024-11-12 22:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:52:55.206598828 +0000 UTC m=+10.327693212" watchObservedRunningTime="2024-11-12 22:52:55.234691609 +0000 UTC m=+10.355785993" Nov 12 22:52:56.240962 systemd[1]: Reloading requested from client PID 2599 ('systemctl') (unit session-9.scope)... Nov 12 22:52:56.240979 systemd[1]: Reloading... Nov 12 22:52:56.320163 zram_generator::config[2641]: No configuration found. Nov 12 22:52:56.425424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:52:56.519314 systemd[1]: Reloading finished in 277 ms. Nov 12 22:52:56.567407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:56.584528 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:52:56.584816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:56.592486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:56.726862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:56.731717 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:52:56.772964 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:52:56.772964 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:52:56.772964 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:52:56.773318 kubelet[2683]: I1112 22:52:56.772942 2683 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:52:56.777737 kubelet[2683]: I1112 22:52:56.777698 2683 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:52:56.777737 kubelet[2683]: I1112 22:52:56.777724 2683 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:52:56.777951 kubelet[2683]: I1112 22:52:56.777930 2683 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:52:56.779436 kubelet[2683]: I1112 22:52:56.779417 2683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 22:52:56.781493 kubelet[2683]: I1112 22:52:56.781458 2683 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:52:56.791647 kubelet[2683]: I1112 22:52:56.791615 2683 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:52:56.791872 kubelet[2683]: I1112 22:52:56.791851 2683 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:52:56.792064 kubelet[2683]: I1112 22:52:56.792044 2683 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:52:56.792153 kubelet[2683]: I1112 22:52:56.792071 2683 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:52:56.792153 kubelet[2683]: I1112 22:52:56.792081 2683 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:52:56.792153 kubelet[2683]: I1112 
22:52:56.792112 2683 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:52:56.792229 kubelet[2683]: I1112 22:52:56.792213 2683 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:52:56.792229 kubelet[2683]: I1112 22:52:56.792226 2683 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:52:56.792270 kubelet[2683]: I1112 22:52:56.792254 2683 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:52:56.792294 kubelet[2683]: I1112 22:52:56.792271 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:52:56.793314 kubelet[2683]: I1112 22:52:56.793291 2683 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:52:56.795142 kubelet[2683]: I1112 22:52:56.793498 2683 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:52:56.795142 kubelet[2683]: I1112 22:52:56.794007 2683 server.go:1256] "Started kubelet" Nov 12 22:52:56.795142 kubelet[2683]: I1112 22:52:56.794286 2683 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:52:56.795142 kubelet[2683]: I1112 22:52:56.794359 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:52:56.795599 kubelet[2683]: I1112 22:52:56.795577 2683 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:52:56.797315 kubelet[2683]: I1112 22:52:56.797299 2683 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:52:56.802308 kubelet[2683]: I1112 22:52:56.802269 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:52:56.803498 kubelet[2683]: E1112 22:52:56.803471 2683 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:52:56.804083 kubelet[2683]: I1112 22:52:56.804054 2683 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:52:56.804233 kubelet[2683]: I1112 22:52:56.804197 2683 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:52:56.804447 kubelet[2683]: I1112 22:52:56.804409 2683 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:52:56.805224 kubelet[2683]: I1112 22:52:56.805205 2683 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:52:56.805301 kubelet[2683]: I1112 22:52:56.805282 2683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:52:56.808008 kubelet[2683]: I1112 22:52:56.807988 2683 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:52:56.814857 kubelet[2683]: I1112 22:52:56.814837 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:52:56.816442 kubelet[2683]: I1112 22:52:56.816408 2683 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:52:56.816489 kubelet[2683]: I1112 22:52:56.816483 2683 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:52:56.816515 kubelet[2683]: I1112 22:52:56.816504 2683 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:52:56.816584 kubelet[2683]: E1112 22:52:56.816569 2683 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:52:56.837296 kubelet[2683]: I1112 22:52:56.837270 2683 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:52:56.837296 kubelet[2683]: I1112 22:52:56.837292 2683 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:52:56.837390 kubelet[2683]: I1112 22:52:56.837307 2683 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:52:56.837453 kubelet[2683]: I1112 22:52:56.837435 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:52:56.837482 kubelet[2683]: I1112 22:52:56.837460 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:52:56.837482 kubelet[2683]: I1112 22:52:56.837467 2683 policy_none.go:49] "None policy: Start" Nov 12 22:52:56.838034 kubelet[2683]: I1112 22:52:56.838014 2683 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:52:56.838034 kubelet[2683]: I1112 22:52:56.838035 2683 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:52:56.838204 kubelet[2683]: I1112 22:52:56.838187 2683 state_mem.go:75] "Updated machine memory state" Nov 12 22:52:56.842024 kubelet[2683]: I1112 22:52:56.842005 2683 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:52:56.842615 kubelet[2683]: I1112 22:52:56.842591 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:52:56.917719 kubelet[2683]: I1112 22:52:56.917666 2683 topology_manager.go:215] "Topology Admit Handler" 
podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:52:56.917866 kubelet[2683]: I1112 22:52:56.917771 2683 topology_manager.go:215] "Topology Admit Handler" podUID="6c29179bf054d187d40ec4bbf4e563a6" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:52:56.917866 kubelet[2683]: I1112 22:52:56.917820 2683 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:52:56.947648 kubelet[2683]: I1112 22:52:56.947620 2683 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:52:57.006281 kubelet[2683]: I1112 22:52:57.006240 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.006281 kubelet[2683]: I1112 22:52:57.006276 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:52:57.006281 kubelet[2683]: I1112 22:52:57.006294 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:52:57.006469 kubelet[2683]: I1112 22:52:57.006313 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.006469 kubelet[2683]: I1112 22:52:57.006335 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.006469 kubelet[2683]: I1112 22:52:57.006439 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.006531 kubelet[2683]: I1112 22:52:57.006481 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.006899 kubelet[2683]: I1112 22:52:57.006569 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:52:57.006899 kubelet[2683]: I1112 22:52:57.006644 2683 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c29179bf054d187d40ec4bbf4e563a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c29179bf054d187d40ec4bbf4e563a6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:52:57.007251 kubelet[2683]: E1112 22:52:57.007162 2683 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 22:52:57.021475 kubelet[2683]: E1112 22:52:57.021435 2683 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 22:52:57.065303 kubelet[2683]: I1112 22:52:57.065176 2683 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:52:57.065303 kubelet[2683]: I1112 22:52:57.065262 2683 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:52:57.237965 kubelet[2683]: E1112 22:52:57.237928 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.308533 kubelet[2683]: E1112 22:52:57.308491 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.322191 kubelet[2683]: E1112 22:52:57.322073 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.793451 kubelet[2683]: I1112 22:52:57.793411 2683 apiserver.go:52] "Watching apiserver" Nov 12 22:52:57.804606 kubelet[2683]: I1112 22:52:57.804559 2683 desired_state_of_world_populator.go:159] "Finished populating 
initial desired state of world" Nov 12 22:52:57.825841 kubelet[2683]: E1112 22:52:57.825804 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.825900 kubelet[2683]: E1112 22:52:57.825866 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.826582 kubelet[2683]: E1112 22:52:57.826566 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:57.915284 kubelet[2683]: I1112 22:52:57.915243 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.915197834 podStartE2EDuration="1.915197834s" podCreationTimestamp="2024-11-12 22:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:52:57.914982488 +0000 UTC m=+1.178706042" watchObservedRunningTime="2024-11-12 22:52:57.915197834 +0000 UTC m=+1.178921388" Nov 12 22:52:58.827293 kubelet[2683]: E1112 22:52:58.827258 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:58.827860 kubelet[2683]: E1112 22:52:58.827437 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:52:59.828968 kubelet[2683]: E1112 22:52:59.828819 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 12 22:53:00.831201 kubelet[2683]: E1112 22:53:00.831172 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:01.832539 kubelet[2683]: E1112 22:53:01.832509 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:02.087791 sudo[1652]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:02.089060 sshd[1651]: Connection closed by 10.0.0.1 port 42856 Nov 12 22:53:02.089492 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:02.092598 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:42856.service: Deactivated successfully. Nov 12 22:53:02.094558 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:53:02.094748 systemd[1]: session-9.scope: Consumed 5.295s CPU time, 191.0M memory peak, 0B memory swap peak. Nov 12 22:53:02.096319 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:53:02.097346 systemd-logind[1437]: Removed session 9. 
Nov 12 22:53:02.942251 kubelet[2683]: E1112 22:53:02.942212 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:03.835048 kubelet[2683]: E1112 22:53:03.834997 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:08.822892 kubelet[2683]: E1112 22:53:08.822850 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:08.963617 kubelet[2683]: I1112 22:53:08.963587 2683 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:53:08.964003 containerd[1458]: time="2024-11-12T22:53:08.963963290Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 22:53:08.964380 kubelet[2683]: I1112 22:53:08.964177 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:53:09.529086 kubelet[2683]: I1112 22:53:09.529032 2683 topology_manager.go:215] "Topology Admit Handler" podUID="231e3e9e-9ae1-41e6-a774-18d897cf24a2" podNamespace="kube-system" podName="kube-proxy-khph4" Nov 12 22:53:09.535514 systemd[1]: Created slice kubepods-besteffort-pod231e3e9e_9ae1_41e6_a774_18d897cf24a2.slice - libcontainer container kubepods-besteffort-pod231e3e9e_9ae1_41e6_a774_18d897cf24a2.slice. 
Nov 12 22:53:09.578004 kubelet[2683]: I1112 22:53:09.577565 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/231e3e9e-9ae1-41e6-a774-18d897cf24a2-lib-modules\") pod \"kube-proxy-khph4\" (UID: \"231e3e9e-9ae1-41e6-a774-18d897cf24a2\") " pod="kube-system/kube-proxy-khph4" Nov 12 22:53:09.578004 kubelet[2683]: I1112 22:53:09.577752 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/231e3e9e-9ae1-41e6-a774-18d897cf24a2-xtables-lock\") pod \"kube-proxy-khph4\" (UID: \"231e3e9e-9ae1-41e6-a774-18d897cf24a2\") " pod="kube-system/kube-proxy-khph4" Nov 12 22:53:09.578607 kubelet[2683]: I1112 22:53:09.578396 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/231e3e9e-9ae1-41e6-a774-18d897cf24a2-kube-proxy\") pod \"kube-proxy-khph4\" (UID: \"231e3e9e-9ae1-41e6-a774-18d897cf24a2\") " pod="kube-system/kube-proxy-khph4" Nov 12 22:53:09.578659 kubelet[2683]: I1112 22:53:09.578621 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqms7\" (UniqueName: \"kubernetes.io/projected/231e3e9e-9ae1-41e6-a774-18d897cf24a2-kube-api-access-kqms7\") pod \"kube-proxy-khph4\" (UID: \"231e3e9e-9ae1-41e6-a774-18d897cf24a2\") " pod="kube-system/kube-proxy-khph4" Nov 12 22:53:09.736836 kubelet[2683]: E1112 22:53:09.736791 2683 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 22:53:09.736836 kubelet[2683]: E1112 22:53:09.736829 2683 projected.go:200] Error preparing data for projected volume kube-api-access-kqms7 for pod kube-system/kube-proxy-khph4: configmap "kube-root-ca.crt" not found Nov 12 22:53:09.737025 kubelet[2683]: E1112 22:53:09.736887 2683 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/231e3e9e-9ae1-41e6-a774-18d897cf24a2-kube-api-access-kqms7 podName:231e3e9e-9ae1-41e6-a774-18d897cf24a2 nodeName:}" failed. No retries permitted until 2024-11-12 22:53:10.23686627 +0000 UTC m=+13.500589824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kqms7" (UniqueName: "kubernetes.io/projected/231e3e9e-9ae1-41e6-a774-18d897cf24a2-kube-api-access-kqms7") pod "kube-proxy-khph4" (UID: "231e3e9e-9ae1-41e6-a774-18d897cf24a2") : configmap "kube-root-ca.crt" not found Nov 12 22:53:10.251502 kubelet[2683]: I1112 22:53:10.251452 2683 topology_manager.go:215] "Topology Admit Handler" podUID="b0b97a45-2a39-4946-b96a-813875de8993" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-6zpt8" Nov 12 22:53:10.257676 systemd[1]: Created slice kubepods-besteffort-podb0b97a45_2a39_4946_b96a_813875de8993.slice - libcontainer container kubepods-besteffort-podb0b97a45_2a39_4946_b96a_813875de8993.slice. 
Nov 12 22:53:10.282900 kubelet[2683]: I1112 22:53:10.282862 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0b97a45-2a39-4946-b96a-813875de8993-var-lib-calico\") pod \"tigera-operator-56b74f76df-6zpt8\" (UID: \"b0b97a45-2a39-4946-b96a-813875de8993\") " pod="tigera-operator/tigera-operator-56b74f76df-6zpt8" Nov 12 22:53:10.282900 kubelet[2683]: I1112 22:53:10.282905 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkxm\" (UniqueName: \"kubernetes.io/projected/b0b97a45-2a39-4946-b96a-813875de8993-kube-api-access-9pkxm\") pod \"tigera-operator-56b74f76df-6zpt8\" (UID: \"b0b97a45-2a39-4946-b96a-813875de8993\") " pod="tigera-operator/tigera-operator-56b74f76df-6zpt8" Nov 12 22:53:10.443506 kubelet[2683]: E1112 22:53:10.443472 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:10.444015 containerd[1458]: time="2024-11-12T22:53:10.443949374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khph4,Uid:231e3e9e-9ae1-41e6-a774-18d897cf24a2,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:10.561113 containerd[1458]: time="2024-11-12T22:53:10.560984624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-6zpt8,Uid:b0b97a45-2a39-4946-b96a-813875de8993,Namespace:tigera-operator,Attempt:0,}" Nov 12 22:53:10.789203 containerd[1458]: time="2024-11-12T22:53:10.789089791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:10.789203 containerd[1458]: time="2024-11-12T22:53:10.789184089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:10.789203 containerd[1458]: time="2024-11-12T22:53:10.789201441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:10.789359 containerd[1458]: time="2024-11-12T22:53:10.789300106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:10.812277 systemd[1]: Started cri-containerd-3ee7408377d7145c6517156a776f2050103efc158b3f469f1a954200c7fc7d71.scope - libcontainer container 3ee7408377d7145c6517156a776f2050103efc158b3f469f1a954200c7fc7d71. Nov 12 22:53:10.835904 containerd[1458]: time="2024-11-12T22:53:10.835866154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khph4,Uid:231e3e9e-9ae1-41e6-a774-18d897cf24a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ee7408377d7145c6517156a776f2050103efc158b3f469f1a954200c7fc7d71\"" Nov 12 22:53:10.836632 kubelet[2683]: E1112 22:53:10.836612 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:10.838602 containerd[1458]: time="2024-11-12T22:53:10.838560089Z" level=info msg="CreateContainer within sandbox \"3ee7408377d7145c6517156a776f2050103efc158b3f469f1a954200c7fc7d71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:53:11.063681 containerd[1458]: time="2024-11-12T22:53:11.063347103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:11.063681 containerd[1458]: time="2024-11-12T22:53:11.063410793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:11.063681 containerd[1458]: time="2024-11-12T22:53:11.063421483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:11.063681 containerd[1458]: time="2024-11-12T22:53:11.063498107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:11.083289 systemd[1]: Started cri-containerd-ee89a1f64e029f806e6b6f46f0aa10dccb78bfd62c4cc3b6c8f1351b0ab8b831.scope - libcontainer container ee89a1f64e029f806e6b6f46f0aa10dccb78bfd62c4cc3b6c8f1351b0ab8b831. Nov 12 22:53:11.124299 containerd[1458]: time="2024-11-12T22:53:11.124233897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-6zpt8,Uid:b0b97a45-2a39-4946-b96a-813875de8993,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee89a1f64e029f806e6b6f46f0aa10dccb78bfd62c4cc3b6c8f1351b0ab8b831\"" Nov 12 22:53:11.126313 containerd[1458]: time="2024-11-12T22:53:11.126270805Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 22:53:11.242097 containerd[1458]: time="2024-11-12T22:53:11.242044223Z" level=info msg="CreateContainer within sandbox \"3ee7408377d7145c6517156a776f2050103efc158b3f469f1a954200c7fc7d71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea9bad392196d3f93dedc1dc12a422e595bc8231a867fbd5fb7721a22667802a\"" Nov 12 22:53:11.242549 containerd[1458]: time="2024-11-12T22:53:11.242524867Z" level=info msg="StartContainer for \"ea9bad392196d3f93dedc1dc12a422e595bc8231a867fbd5fb7721a22667802a\"" Nov 12 22:53:11.270255 systemd[1]: Started cri-containerd-ea9bad392196d3f93dedc1dc12a422e595bc8231a867fbd5fb7721a22667802a.scope - libcontainer container ea9bad392196d3f93dedc1dc12a422e595bc8231a867fbd5fb7721a22667802a. 
Nov 12 22:53:11.344912 containerd[1458]: time="2024-11-12T22:53:11.344800315Z" level=info msg="StartContainer for \"ea9bad392196d3f93dedc1dc12a422e595bc8231a867fbd5fb7721a22667802a\" returns successfully" Nov 12 22:53:11.847569 kubelet[2683]: E1112 22:53:11.847539 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:12.850066 kubelet[2683]: E1112 22:53:12.850023 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:13.945628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039634741.mount: Deactivated successfully. Nov 12 22:53:15.451830 containerd[1458]: time="2024-11-12T22:53:15.451743854Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:15.469614 containerd[1458]: time="2024-11-12T22:53:15.469553314Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763355" Nov 12 22:53:15.491315 containerd[1458]: time="2024-11-12T22:53:15.491241980Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:15.531949 containerd[1458]: time="2024-11-12T22:53:15.531873576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:15.532845 containerd[1458]: time="2024-11-12T22:53:15.532791581Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag 
\"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 4.406477173s" Nov 12 22:53:15.532845 containerd[1458]: time="2024-11-12T22:53:15.532830023Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 22:53:15.534600 containerd[1458]: time="2024-11-12T22:53:15.534561546Z" level=info msg="CreateContainer within sandbox \"ee89a1f64e029f806e6b6f46f0aa10dccb78bfd62c4cc3b6c8f1351b0ab8b831\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 22:53:15.731720 containerd[1458]: time="2024-11-12T22:53:15.731582613Z" level=info msg="CreateContainer within sandbox \"ee89a1f64e029f806e6b6f46f0aa10dccb78bfd62c4cc3b6c8f1351b0ab8b831\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb5f4a60933e04c0077f1cb71b062ede166c7f2f7fc7a4b101588f0393a61846\"" Nov 12 22:53:15.732416 containerd[1458]: time="2024-11-12T22:53:15.732067875Z" level=info msg="StartContainer for \"eb5f4a60933e04c0077f1cb71b062ede166c7f2f7fc7a4b101588f0393a61846\"" Nov 12 22:53:15.762316 systemd[1]: Started cri-containerd-eb5f4a60933e04c0077f1cb71b062ede166c7f2f7fc7a4b101588f0393a61846.scope - libcontainer container eb5f4a60933e04c0077f1cb71b062ede166c7f2f7fc7a4b101588f0393a61846. 
Nov 12 22:53:15.807635 containerd[1458]: time="2024-11-12T22:53:15.807566836Z" level=info msg="StartContainer for \"eb5f4a60933e04c0077f1cb71b062ede166c7f2f7fc7a4b101588f0393a61846\" returns successfully" Nov 12 22:53:15.887011 kubelet[2683]: I1112 22:53:15.886956 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-khph4" podStartSLOduration=6.88690461 podStartE2EDuration="6.88690461s" podCreationTimestamp="2024-11-12 22:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:11.885014265 +0000 UTC m=+15.148737819" watchObservedRunningTime="2024-11-12 22:53:15.88690461 +0000 UTC m=+19.150628164" Nov 12 22:53:19.259201 kubelet[2683]: I1112 22:53:19.259156 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-6zpt8" podStartSLOduration=4.851527338 podStartE2EDuration="9.259092216s" podCreationTimestamp="2024-11-12 22:53:10 +0000 UTC" firstStartedPulling="2024-11-12 22:53:11.125673833 +0000 UTC m=+14.389397377" lastFinishedPulling="2024-11-12 22:53:15.533238701 +0000 UTC m=+18.796962255" observedRunningTime="2024-11-12 22:53:15.886892457 +0000 UTC m=+19.150616021" watchObservedRunningTime="2024-11-12 22:53:19.259092216 +0000 UTC m=+22.522815770" Nov 12 22:53:19.260065 kubelet[2683]: I1112 22:53:19.259364 2683 topology_manager.go:215] "Topology Admit Handler" podUID="966c8acf-bc1d-4b97-8f63-ebd60ade7023" podNamespace="calico-system" podName="calico-typha-984fdf767-jwvm9" Nov 12 22:53:19.267734 systemd[1]: Created slice kubepods-besteffort-pod966c8acf_bc1d_4b97_8f63_ebd60ade7023.slice - libcontainer container kubepods-besteffort-pod966c8acf_bc1d_4b97_8f63_ebd60ade7023.slice. 
Nov 12 22:53:19.344589 kubelet[2683]: I1112 22:53:19.344552 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/966c8acf-bc1d-4b97-8f63-ebd60ade7023-tigera-ca-bundle\") pod \"calico-typha-984fdf767-jwvm9\" (UID: \"966c8acf-bc1d-4b97-8f63-ebd60ade7023\") " pod="calico-system/calico-typha-984fdf767-jwvm9" Nov 12 22:53:19.344589 kubelet[2683]: I1112 22:53:19.344593 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/966c8acf-bc1d-4b97-8f63-ebd60ade7023-typha-certs\") pod \"calico-typha-984fdf767-jwvm9\" (UID: \"966c8acf-bc1d-4b97-8f63-ebd60ade7023\") " pod="calico-system/calico-typha-984fdf767-jwvm9" Nov 12 22:53:19.344769 kubelet[2683]: I1112 22:53:19.344614 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr99g\" (UniqueName: \"kubernetes.io/projected/966c8acf-bc1d-4b97-8f63-ebd60ade7023-kube-api-access-fr99g\") pod \"calico-typha-984fdf767-jwvm9\" (UID: \"966c8acf-bc1d-4b97-8f63-ebd60ade7023\") " pod="calico-system/calico-typha-984fdf767-jwvm9" Nov 12 22:53:19.484953 kubelet[2683]: I1112 22:53:19.484538 2683 topology_manager.go:215] "Topology Admit Handler" podUID="60d50491-0211-4e93-94e9-c7d4066c62b3" podNamespace="calico-system" podName="calico-node-zws85" Nov 12 22:53:19.491189 systemd[1]: Created slice kubepods-besteffort-pod60d50491_0211_4e93_94e9_c7d4066c62b3.slice - libcontainer container kubepods-besteffort-pod60d50491_0211_4e93_94e9_c7d4066c62b3.slice. 
Nov 12 22:53:19.546427 kubelet[2683]: I1112 22:53:19.546278 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-xtables-lock\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546427 kubelet[2683]: I1112 22:53:19.546320 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-policysync\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546427 kubelet[2683]: I1112 22:53:19.546341 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-var-run-calico\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546427 kubelet[2683]: I1112 22:53:19.546363 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-cni-log-dir\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546670 kubelet[2683]: I1112 22:53:19.546501 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-lib-modules\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546670 kubelet[2683]: I1112 22:53:19.546553 2683 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60d50491-0211-4e93-94e9-c7d4066c62b3-node-certs\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546670 kubelet[2683]: I1112 22:53:19.546606 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr827\" (UniqueName: \"kubernetes.io/projected/60d50491-0211-4e93-94e9-c7d4066c62b3-kube-api-access-lr827\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546670 kubelet[2683]: I1112 22:53:19.546647 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60d50491-0211-4e93-94e9-c7d4066c62b3-tigera-ca-bundle\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546807 kubelet[2683]: I1112 22:53:19.546679 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-cni-bin-dir\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546807 kubelet[2683]: I1112 22:53:19.546740 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-cni-net-dir\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546807 kubelet[2683]: I1112 22:53:19.546783 2683 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-var-lib-calico\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.546807 kubelet[2683]: I1112 22:53:19.546804 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60d50491-0211-4e93-94e9-c7d4066c62b3-flexvol-driver-host\") pod \"calico-node-zws85\" (UID: \"60d50491-0211-4e93-94e9-c7d4066c62b3\") " pod="calico-system/calico-node-zws85" Nov 12 22:53:19.572019 kubelet[2683]: E1112 22:53:19.571991 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:19.572523 containerd[1458]: time="2024-11-12T22:53:19.572486923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-984fdf767-jwvm9,Uid:966c8acf-bc1d-4b97-8f63-ebd60ade7023,Namespace:calico-system,Attempt:0,}" Nov 12 22:53:19.651350 kubelet[2683]: E1112 22:53:19.651268 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.651350 kubelet[2683]: W1112 22:53:19.651291 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.651350 kubelet[2683]: E1112 22:53:19.651312 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.651665 kubelet[2683]: E1112 22:53:19.651629 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.651665 kubelet[2683]: W1112 22:53:19.651642 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.651665 kubelet[2683]: E1112 22:53:19.651654 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:19.734553 kubelet[2683]: I1112 22:53:19.734159 2683 topology_manager.go:215] "Topology Admit Handler" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" podNamespace="calico-system" podName="csi-node-driver-ghdrg" Nov 12 22:53:19.734553 kubelet[2683]: E1112 22:53:19.734401 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:19.741351 kubelet[2683]: E1112 22:53:19.741319 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.741536 kubelet[2683]: W1112 22:53:19.741481 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.741536 kubelet[2683]: E1112 22:53:19.741505 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.747794 containerd[1458]: time="2024-11-12T22:53:19.747679983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:19.747794 containerd[1458]: time="2024-11-12T22:53:19.747739575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:19.747794 containerd[1458]: time="2024-11-12T22:53:19.747751337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:19.747995 containerd[1458]: time="2024-11-12T22:53:19.747827289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:19.770256 systemd[1]: Started cri-containerd-00a7dd71b81c139e93783de76ddfe5d9c209ab47a8e26c89f42a74ae4467bd88.scope - libcontainer container 00a7dd71b81c139e93783de76ddfe5d9c209ab47a8e26c89f42a74ae4467bd88. 
Nov 12 22:53:19.794238 kubelet[2683]: E1112 22:53:19.794178 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:19.794694 containerd[1458]: time="2024-11-12T22:53:19.794649423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zws85,Uid:60d50491-0211-4e93-94e9-c7d4066c62b3,Namespace:calico-system,Attempt:0,}" Nov 12 22:53:19.804912 containerd[1458]: time="2024-11-12T22:53:19.804279839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-984fdf767-jwvm9,Uid:966c8acf-bc1d-4b97-8f63-ebd60ade7023,Namespace:calico-system,Attempt:0,} returns sandbox id \"00a7dd71b81c139e93783de76ddfe5d9c209ab47a8e26c89f42a74ae4467bd88\"" Nov 12 22:53:19.805114 kubelet[2683]: E1112 22:53:19.804725 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:19.807987 containerd[1458]: time="2024-11-12T22:53:19.805739581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 22:53:19.836601 kubelet[2683]: E1112 22:53:19.836561 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.836601 kubelet[2683]: W1112 22:53:19.836585 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.836601 kubelet[2683]: E1112 22:53:19.836608 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.836818 kubelet[2683]: E1112 22:53:19.836798 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.836818 kubelet[2683]: W1112 22:53:19.836809 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.836818 kubelet[2683]: E1112 22:53:19.836820 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:19.837001 kubelet[2683]: E1112 22:53:19.836981 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.837001 kubelet[2683]: W1112 22:53:19.836992 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.837001 kubelet[2683]: E1112 22:53:19.837002 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.837207 kubelet[2683]: E1112 22:53:19.837192 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.837247 kubelet[2683]: W1112 22:53:19.837204 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.837247 kubelet[2683]: E1112 22:53:19.837226 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:19.837432 kubelet[2683]: E1112 22:53:19.837418 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.837432 kubelet[2683]: W1112 22:53:19.837430 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.837484 kubelet[2683]: E1112 22:53:19.837443 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 22:53:19.850643 kubelet[2683]: I1112 22:53:19.850604 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzp6g\" (UniqueName: \"kubernetes.io/projected/fd0f5998-8c5a-42b9-a810-034dc8c3ba70-kube-api-access-vzp6g\") pod \"csi-node-driver-ghdrg\" (UID: \"fd0f5998-8c5a-42b9-a810-034dc8c3ba70\") " pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:19.850782 kubelet[2683]: E1112 22:53:19.850767 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.850782 kubelet[2683]: W1112 22:53:19.850780 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.850842 kubelet[2683]: E1112 22:53:19.850795 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 22:53:19.850842 kubelet[2683]: I1112 22:53:19.850813 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd0f5998-8c5a-42b9-a810-034dc8c3ba70-socket-dir\") pod \"csi-node-driver-ghdrg\" (UID: \"fd0f5998-8c5a-42b9-a810-034dc8c3ba70\") " pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:19.851158 kubelet[2683]: E1112 22:53:19.851109 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.851158 kubelet[2683]: W1112 22:53:19.851149 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.851231 kubelet[2683]: E1112 22:53:19.851182 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 22:53:19.851400 kubelet[2683]: E1112 22:53:19.851380 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.851400 kubelet[2683]: W1112 22:53:19.851391 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.851446 kubelet[2683]: E1112 22:53:19.851407 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 22:53:19.851593 kubelet[2683]: E1112 22:53:19.851579 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.851593 kubelet[2683]: W1112 22:53:19.851591 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.851649 kubelet[2683]: E1112 22:53:19.851607 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 22:53:19.851649 kubelet[2683]: I1112 22:53:19.851626 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd0f5998-8c5a-42b9-a810-034dc8c3ba70-varrun\") pod \"csi-node-driver-ghdrg\" (UID: \"fd0f5998-8c5a-42b9-a810-034dc8c3ba70\") " pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:19.851829 kubelet[2683]: E1112 22:53:19.851815 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.851829 kubelet[2683]: W1112 22:53:19.851826 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.851873 kubelet[2683]: E1112 22:53:19.851841 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 22:53:19.852301 kubelet[2683]: I1112 22:53:19.852280 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd0f5998-8c5a-42b9-a810-034dc8c3ba70-registration-dir\") pod \"csi-node-driver-ghdrg\" (UID: \"fd0f5998-8c5a-42b9-a810-034dc8c3ba70\") " pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:19.852465 kubelet[2683]: E1112 22:53:19.852451 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.852465 kubelet[2683]: W1112 22:53:19.852462 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.852517 kubelet[2683]: E1112 22:53:19.852478 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 22:53:19.852666 kubelet[2683]: E1112 22:53:19.852655 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.852707 kubelet[2683]: W1112 22:53:19.852665 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.852707 kubelet[2683]: E1112 22:53:19.852681 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 22:53:19.852864 kubelet[2683]: E1112 22:53:19.852853 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.852864 kubelet[2683]: W1112 22:53:19.852862 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.852906 kubelet[2683]: E1112 22:53:19.852875 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 22:53:19.852906 kubelet[2683]: I1112 22:53:19.852892 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd0f5998-8c5a-42b9-a810-034dc8c3ba70-kubelet-dir\") pod \"csi-node-driver-ghdrg\" (UID: \"fd0f5998-8c5a-42b9-a810-034dc8c3ba70\") " pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:19.853104 kubelet[2683]: E1112 22:53:19.853088 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 22:53:19.853104 kubelet[2683]: W1112 22:53:19.853102 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 22:53:19.853160 kubelet[2683]: E1112 22:53:19.853120 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.958706 kubelet[2683]: E1112 22:53:19.958694 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.958706 kubelet[2683]: W1112 22:53:19.958704 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.958766 kubelet[2683]: E1112 22:53:19.958721 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:19.958947 kubelet[2683]: E1112 22:53:19.958936 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.958947 kubelet[2683]: W1112 22:53:19.958945 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.959002 kubelet[2683]: E1112 22:53:19.958975 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:19.959141 kubelet[2683]: E1112 22:53:19.959120 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.959141 kubelet[2683]: W1112 22:53:19.959138 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.959204 kubelet[2683]: E1112 22:53:19.959148 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:19.959554 kubelet[2683]: E1112 22:53:19.959537 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:19.959554 kubelet[2683]: W1112 22:53:19.959546 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:19.959554 kubelet[2683]: E1112 22:53:19.959556 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:20.013600 containerd[1458]: time="2024-11-12T22:53:20.013525178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:20.013600 containerd[1458]: time="2024-11-12T22:53:20.013569903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:20.013600 containerd[1458]: time="2024-11-12T22:53:20.013579521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:20.013774 containerd[1458]: time="2024-11-12T22:53:20.013643711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:20.038305 systemd[1]: Started cri-containerd-cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570.scope - libcontainer container cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570. Nov 12 22:53:20.042178 kubelet[2683]: E1112 22:53:20.042148 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:20.042178 kubelet[2683]: W1112 22:53:20.042169 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:20.042307 kubelet[2683]: E1112 22:53:20.042188 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:20.062970 containerd[1458]: time="2024-11-12T22:53:20.062839044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zws85,Uid:60d50491-0211-4e93-94e9-c7d4066c62b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\"" Nov 12 22:53:20.063532 kubelet[2683]: E1112 22:53:20.063489 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:21.817733 kubelet[2683]: E1112 22:53:21.817662 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:23.817649 kubelet[2683]: E1112 22:53:23.817597 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:24.566335 containerd[1458]: time="2024-11-12T22:53:24.566277783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:24.585544 containerd[1458]: time="2024-11-12T22:53:24.585502005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 22:53:24.609072 containerd[1458]: time="2024-11-12T22:53:24.609030817Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:24.627468 containerd[1458]: time="2024-11-12T22:53:24.627425511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:24.628192 containerd[1458]: time="2024-11-12T22:53:24.628151033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 4.82237341s" Nov 12 22:53:24.628192 containerd[1458]: time="2024-11-12T22:53:24.628184977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 22:53:24.635528 containerd[1458]: time="2024-11-12T22:53:24.635500862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 22:53:24.695837 containerd[1458]: time="2024-11-12T22:53:24.695786652Z" level=info msg="CreateContainer within sandbox \"00a7dd71b81c139e93783de76ddfe5d9c209ab47a8e26c89f42a74ae4467bd88\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 22:53:25.060843 containerd[1458]: time="2024-11-12T22:53:25.060791636Z" level=info msg="CreateContainer within sandbox \"00a7dd71b81c139e93783de76ddfe5d9c209ab47a8e26c89f42a74ae4467bd88\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cff08e59033ec6ea71e5d7c94bace20fb0e02cd337ecb5ed3c57f4155000e08f\"" Nov 12 22:53:25.061234 containerd[1458]: time="2024-11-12T22:53:25.061197417Z" level=info msg="StartContainer for \"cff08e59033ec6ea71e5d7c94bace20fb0e02cd337ecb5ed3c57f4155000e08f\"" Nov 12 22:53:25.085263 
systemd[1]: Started cri-containerd-cff08e59033ec6ea71e5d7c94bace20fb0e02cd337ecb5ed3c57f4155000e08f.scope - libcontainer container cff08e59033ec6ea71e5d7c94bace20fb0e02cd337ecb5ed3c57f4155000e08f. Nov 12 22:53:25.301309 containerd[1458]: time="2024-11-12T22:53:25.301259123Z" level=info msg="StartContainer for \"cff08e59033ec6ea71e5d7c94bace20fb0e02cd337ecb5ed3c57f4155000e08f\" returns successfully" Nov 12 22:53:25.817284 kubelet[2683]: E1112 22:53:25.817255 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:25.951034 kubelet[2683]: E1112 22:53:25.950993 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:25.984032 kubelet[2683]: E1112 22:53:25.984000 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.984032 kubelet[2683]: W1112 22:53:25.984018 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.984032 kubelet[2683]: E1112 22:53:25.984037 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.984268 kubelet[2683]: E1112 22:53:25.984244 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.984268 kubelet[2683]: W1112 22:53:25.984255 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.984268 kubelet[2683]: E1112 22:53:25.984269 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.984482 kubelet[2683]: E1112 22:53:25.984453 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.984482 kubelet[2683]: W1112 22:53:25.984464 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.984482 kubelet[2683]: E1112 22:53:25.984477 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.984689 kubelet[2683]: E1112 22:53:25.984659 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.984689 kubelet[2683]: W1112 22:53:25.984671 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.984689 kubelet[2683]: E1112 22:53:25.984684 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.984890 kubelet[2683]: E1112 22:53:25.984868 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.984890 kubelet[2683]: W1112 22:53:25.984879 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.984890 kubelet[2683]: E1112 22:53:25.984892 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.985081 kubelet[2683]: E1112 22:53:25.985060 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.985081 kubelet[2683]: W1112 22:53:25.985070 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.985081 kubelet[2683]: E1112 22:53:25.985082 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.985300 kubelet[2683]: E1112 22:53:25.985271 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.985300 kubelet[2683]: W1112 22:53:25.985280 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.985300 kubelet[2683]: E1112 22:53:25.985293 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.985482 kubelet[2683]: E1112 22:53:25.985460 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.985482 kubelet[2683]: W1112 22:53:25.985470 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.985482 kubelet[2683]: E1112 22:53:25.985483 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.985686 kubelet[2683]: E1112 22:53:25.985665 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.985686 kubelet[2683]: W1112 22:53:25.985676 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.985771 kubelet[2683]: E1112 22:53:25.985689 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.985888 kubelet[2683]: E1112 22:53:25.985866 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.985888 kubelet[2683]: W1112 22:53:25.985876 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.985888 kubelet[2683]: E1112 22:53:25.985889 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.986079 kubelet[2683]: E1112 22:53:25.986058 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.986079 kubelet[2683]: W1112 22:53:25.986068 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.986079 kubelet[2683]: E1112 22:53:25.986080 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.986299 kubelet[2683]: E1112 22:53:25.986278 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.986299 kubelet[2683]: W1112 22:53:25.986289 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.986299 kubelet[2683]: E1112 22:53:25.986301 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.986492 kubelet[2683]: E1112 22:53:25.986471 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.986492 kubelet[2683]: W1112 22:53:25.986481 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.986492 kubelet[2683]: E1112 22:53:25.986493 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:25.986678 kubelet[2683]: E1112 22:53:25.986657 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.986678 kubelet[2683]: W1112 22:53:25.986667 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.986678 kubelet[2683]: E1112 22:53:25.986680 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:25.986872 kubelet[2683]: E1112 22:53:25.986844 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:25.986872 kubelet[2683]: W1112 22:53:25.986861 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:25.986950 kubelet[2683]: E1112 22:53:25.986875 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.126974 kubelet[2683]: E1112 22:53:26.126867 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.126974 kubelet[2683]: W1112 22:53:26.126885 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.126974 kubelet[2683]: E1112 22:53:26.126904 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.127171 kubelet[2683]: E1112 22:53:26.127157 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.127210 kubelet[2683]: W1112 22:53:26.127171 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.127210 kubelet[2683]: E1112 22:53:26.127196 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.127418 kubelet[2683]: E1112 22:53:26.127399 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.127418 kubelet[2683]: W1112 22:53:26.127413 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.127472 kubelet[2683]: E1112 22:53:26.127426 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.127789 kubelet[2683]: E1112 22:53:26.127775 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.127831 kubelet[2683]: W1112 22:53:26.127790 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.127831 kubelet[2683]: E1112 22:53:26.127810 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.128396 kubelet[2683]: E1112 22:53:26.128366 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.128396 kubelet[2683]: W1112 22:53:26.128382 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.128396 kubelet[2683]: E1112 22:53:26.128401 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.128752 kubelet[2683]: E1112 22:53:26.128721 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.128867 kubelet[2683]: W1112 22:53:26.128749 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.128867 kubelet[2683]: E1112 22:53:26.128821 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.129053 kubelet[2683]: E1112 22:53:26.129030 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.129053 kubelet[2683]: W1112 22:53:26.129043 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.129107 kubelet[2683]: E1112 22:53:26.129086 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.129310 kubelet[2683]: E1112 22:53:26.129296 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.129310 kubelet[2683]: W1112 22:53:26.129308 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.129364 kubelet[2683]: E1112 22:53:26.129344 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.129529 kubelet[2683]: E1112 22:53:26.129514 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.129529 kubelet[2683]: W1112 22:53:26.129524 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.129582 kubelet[2683]: E1112 22:53:26.129549 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.129835 kubelet[2683]: E1112 22:53:26.129816 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.129835 kubelet[2683]: W1112 22:53:26.129829 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.129893 kubelet[2683]: E1112 22:53:26.129858 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.130081 kubelet[2683]: E1112 22:53:26.130065 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.130081 kubelet[2683]: W1112 22:53:26.130077 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.130161 kubelet[2683]: E1112 22:53:26.130095 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.130333 kubelet[2683]: E1112 22:53:26.130320 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.130333 kubelet[2683]: W1112 22:53:26.130330 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.130389 kubelet[2683]: E1112 22:53:26.130348 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.130652 kubelet[2683]: E1112 22:53:26.130629 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.130652 kubelet[2683]: W1112 22:53:26.130642 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.130700 kubelet[2683]: E1112 22:53:26.130659 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.130905 kubelet[2683]: E1112 22:53:26.130885 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.130905 kubelet[2683]: W1112 22:53:26.130896 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.130958 kubelet[2683]: E1112 22:53:26.130910 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.131151 kubelet[2683]: E1112 22:53:26.131138 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.131151 kubelet[2683]: W1112 22:53:26.131150 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.131201 kubelet[2683]: E1112 22:53:26.131186 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.131429 kubelet[2683]: E1112 22:53:26.131405 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.131429 kubelet[2683]: W1112 22:53:26.131425 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.131487 kubelet[2683]: E1112 22:53:26.131469 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.131980 kubelet[2683]: E1112 22:53:26.131959 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.131980 kubelet[2683]: W1112 22:53:26.131972 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.132155 kubelet[2683]: E1112 22:53:26.132018 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.132277 kubelet[2683]: E1112 22:53:26.132259 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.132277 kubelet[2683]: W1112 22:53:26.132270 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.132277 kubelet[2683]: E1112 22:53:26.132281 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.951844 kubelet[2683]: I1112 22:53:26.951764 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:53:26.952404 kubelet[2683]: E1112 22:53:26.952384 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:26.993716 kubelet[2683]: E1112 22:53:26.993688 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.993716 kubelet[2683]: W1112 22:53:26.993706 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.993816 kubelet[2683]: E1112 22:53:26.993727 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.993991 kubelet[2683]: E1112 22:53:26.993973 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.993991 kubelet[2683]: W1112 22:53:26.993986 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.994076 kubelet[2683]: E1112 22:53:26.993999 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.994251 kubelet[2683]: E1112 22:53:26.994236 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.994251 kubelet[2683]: W1112 22:53:26.994248 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.994365 kubelet[2683]: E1112 22:53:26.994261 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.994492 kubelet[2683]: E1112 22:53:26.994476 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.994492 kubelet[2683]: W1112 22:53:26.994488 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.994565 kubelet[2683]: E1112 22:53:26.994501 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.994726 kubelet[2683]: E1112 22:53:26.994712 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.994726 kubelet[2683]: W1112 22:53:26.994722 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.994806 kubelet[2683]: E1112 22:53:26.994735 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.994976 kubelet[2683]: E1112 22:53:26.994962 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.994976 kubelet[2683]: W1112 22:53:26.994972 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.995053 kubelet[2683]: E1112 22:53:26.994985 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.995200 kubelet[2683]: E1112 22:53:26.995185 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.995200 kubelet[2683]: W1112 22:53:26.995196 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.995289 kubelet[2683]: E1112 22:53:26.995209 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.995424 kubelet[2683]: E1112 22:53:26.995410 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.995424 kubelet[2683]: W1112 22:53:26.995421 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.995494 kubelet[2683]: E1112 22:53:26.995436 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.995648 kubelet[2683]: E1112 22:53:26.995635 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.995648 kubelet[2683]: W1112 22:53:26.995645 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.995727 kubelet[2683]: E1112 22:53:26.995657 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.995876 kubelet[2683]: E1112 22:53:26.995862 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.995876 kubelet[2683]: W1112 22:53:26.995874 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.995949 kubelet[2683]: E1112 22:53:26.995887 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.996108 kubelet[2683]: E1112 22:53:26.996094 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.996108 kubelet[2683]: W1112 22:53:26.996104 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.996211 kubelet[2683]: E1112 22:53:26.996116 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.996343 kubelet[2683]: E1112 22:53:26.996329 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.996343 kubelet[2683]: W1112 22:53:26.996340 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.996414 kubelet[2683]: E1112 22:53:26.996352 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.996561 kubelet[2683]: E1112 22:53:26.996547 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.996561 kubelet[2683]: W1112 22:53:26.996557 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.996650 kubelet[2683]: E1112 22:53:26.996569 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:26.996796 kubelet[2683]: E1112 22:53:26.996783 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.996796 kubelet[2683]: W1112 22:53:26.996793 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.996887 kubelet[2683]: E1112 22:53:26.996805 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:26.997015 kubelet[2683]: E1112 22:53:26.997001 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:26.997015 kubelet[2683]: W1112 22:53:26.997011 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:26.997087 kubelet[2683]: E1112 22:53:26.997023 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.033376 kubelet[2683]: E1112 22:53:27.033339 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.033376 kubelet[2683]: W1112 22:53:27.033366 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.033376 kubelet[2683]: E1112 22:53:27.033381 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.033604 kubelet[2683]: E1112 22:53:27.033591 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.033604 kubelet[2683]: W1112 22:53:27.033600 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.033660 kubelet[2683]: E1112 22:53:27.033616 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.033839 kubelet[2683]: E1112 22:53:27.033805 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.033839 kubelet[2683]: W1112 22:53:27.033818 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.033901 kubelet[2683]: E1112 22:53:27.033844 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.034069 kubelet[2683]: E1112 22:53:27.034051 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.034069 kubelet[2683]: W1112 22:53:27.034061 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.034155 kubelet[2683]: E1112 22:53:27.034075 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.034299 kubelet[2683]: E1112 22:53:27.034288 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.034299 kubelet[2683]: W1112 22:53:27.034296 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.034343 kubelet[2683]: E1112 22:53:27.034310 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.034615 kubelet[2683]: E1112 22:53:27.034581 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.034615 kubelet[2683]: W1112 22:53:27.034609 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.034679 kubelet[2683]: E1112 22:53:27.034640 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.034872 kubelet[2683]: E1112 22:53:27.034858 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.034872 kubelet[2683]: W1112 22:53:27.034868 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.034921 kubelet[2683]: E1112 22:53:27.034883 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.035101 kubelet[2683]: E1112 22:53:27.035087 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.035139 kubelet[2683]: W1112 22:53:27.035100 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.035139 kubelet[2683]: E1112 22:53:27.035115 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.035326 kubelet[2683]: E1112 22:53:27.035314 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.035326 kubelet[2683]: W1112 22:53:27.035324 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.035367 kubelet[2683]: E1112 22:53:27.035341 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.035515 kubelet[2683]: E1112 22:53:27.035504 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.035515 kubelet[2683]: W1112 22:53:27.035512 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.035515 kubelet[2683]: E1112 22:53:27.035524 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.035706 kubelet[2683]: E1112 22:53:27.035695 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.035706 kubelet[2683]: W1112 22:53:27.035704 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.035754 kubelet[2683]: E1112 22:53:27.035717 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.035916 kubelet[2683]: E1112 22:53:27.035905 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.035916 kubelet[2683]: W1112 22:53:27.035914 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.035965 kubelet[2683]: E1112 22:53:27.035927 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.036136 kubelet[2683]: E1112 22:53:27.036110 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.036136 kubelet[2683]: W1112 22:53:27.036121 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.036190 kubelet[2683]: E1112 22:53:27.036143 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.036329 kubelet[2683]: E1112 22:53:27.036317 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.036329 kubelet[2683]: W1112 22:53:27.036326 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.036399 kubelet[2683]: E1112 22:53:27.036339 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.036525 kubelet[2683]: E1112 22:53:27.036514 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.036525 kubelet[2683]: W1112 22:53:27.036523 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.036564 kubelet[2683]: E1112 22:53:27.036534 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.036700 kubelet[2683]: E1112 22:53:27.036688 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.036700 kubelet[2683]: W1112 22:53:27.036696 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.036758 kubelet[2683]: E1112 22:53:27.036705 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.036905 kubelet[2683]: E1112 22:53:27.036895 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.036905 kubelet[2683]: W1112 22:53:27.036903 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.036950 kubelet[2683]: E1112 22:53:27.036912 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:53:27.037224 kubelet[2683]: E1112 22:53:27.037211 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:53:27.037224 kubelet[2683]: W1112 22:53:27.037220 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:53:27.037291 kubelet[2683]: E1112 22:53:27.037231 2683 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:53:27.817322 kubelet[2683]: E1112 22:53:27.817277 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:28.174862 containerd[1458]: time="2024-11-12T22:53:28.174654335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:28.215068 containerd[1458]: time="2024-11-12T22:53:28.215017137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 22:53:28.230538 containerd[1458]: time="2024-11-12T22:53:28.230500219Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:28.260650 containerd[1458]: time="2024-11-12T22:53:28.260596170Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:28.261237 containerd[1458]: time="2024-11-12T22:53:28.261202368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 3.625674375s" Nov 12 22:53:28.261237 containerd[1458]: time="2024-11-12T22:53:28.261232053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 22:53:28.266172 containerd[1458]: time="2024-11-12T22:53:28.266115108Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 22:53:28.540282 containerd[1458]: time="2024-11-12T22:53:28.540235466Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac\"" Nov 12 22:53:28.540960 containerd[1458]: time="2024-11-12T22:53:28.540793554Z" level=info msg="StartContainer for \"393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac\"" Nov 12 22:53:28.573058 systemd[1]: Started cri-containerd-393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac.scope - libcontainer container 393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac. 
Nov 12 22:53:28.634721 systemd[1]: cri-containerd-393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac.scope: Deactivated successfully. Nov 12 22:53:28.843278 containerd[1458]: time="2024-11-12T22:53:28.842955045Z" level=info msg="StartContainer for \"393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac\" returns successfully" Nov 12 22:53:28.867254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac-rootfs.mount: Deactivated successfully. Nov 12 22:53:28.879955 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864). Nov 12 22:53:28.942181 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:53:28.943663 sshd-session[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:28.956518 kubelet[2683]: E1112 22:53:28.956495 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:29.351637 kubelet[2683]: I1112 22:53:29.351596 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-984fdf767-jwvm9" podStartSLOduration=5.521284739 podStartE2EDuration="10.351546931s" podCreationTimestamp="2024-11-12 22:53:19 +0000 UTC" firstStartedPulling="2024-11-12 22:53:19.805120388 +0000 UTC m=+23.068843942" lastFinishedPulling="2024-11-12 22:53:24.63538258 +0000 UTC m=+27.899106134" observedRunningTime="2024-11-12 22:53:26.511565241 +0000 UTC m=+29.775288795" watchObservedRunningTime="2024-11-12 22:53:29.351546931 +0000 UTC m=+32.615270485" Nov 12 22:53:29.352738 systemd-logind[1437]: New session 10 of user core. Nov 12 22:53:29.365255 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 12 22:53:29.412646 containerd[1458]: time="2024-11-12T22:53:29.412586067Z" level=info msg="shim disconnected" id=393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac namespace=k8s.io Nov 12 22:53:29.412646 containerd[1458]: time="2024-11-12T22:53:29.412639387Z" level=warning msg="cleaning up after shim disconnected" id=393b448a9fd0938280da5085b7c6157ee679e46d9af6c00c1ec748f1706150ac namespace=k8s.io Nov 12 22:53:29.412646 containerd[1458]: time="2024-11-12T22:53:29.412650237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:53:29.484734 sshd[3418]: Connection closed by 10.0.0.1 port 34864 Nov 12 22:53:29.485081 sshd-session[3410]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:29.488721 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:34864.service: Deactivated successfully. Nov 12 22:53:29.490585 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:53:29.491240 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:53:29.492075 systemd-logind[1437]: Removed session 10. 
Nov 12 22:53:29.816719 kubelet[2683]: E1112 22:53:29.816679 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:29.958880 kubelet[2683]: E1112 22:53:29.958850 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:29.959623 containerd[1458]: time="2024-11-12T22:53:29.959569237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 22:53:31.817277 kubelet[2683]: E1112 22:53:31.817220 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:33.817041 kubelet[2683]: E1112 22:53:33.816984 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:34.510803 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:34878.service - OpenSSH per-connection server daemon (10.0.0.1:34878). 
Nov 12 22:53:34.553253 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 34878 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:53:34.554894 sshd-session[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:53:34.560068 systemd-logind[1437]: New session 11 of user core.
Nov 12 22:53:34.564287 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 22:53:34.732015 sshd[3451]: Connection closed by 10.0.0.1 port 34878
Nov 12 22:53:34.732384 sshd-session[3449]: pam_unix(sshd:session): session closed for user core
Nov 12 22:53:34.736117 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:34878.service: Deactivated successfully.
Nov 12 22:53:34.738845 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 22:53:34.739737 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit.
Nov 12 22:53:34.740951 systemd-logind[1437]: Removed session 11.
Nov 12 22:53:34.952566 containerd[1458]: time="2024-11-12T22:53:34.952527126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:34.965668 containerd[1458]: time="2024-11-12T22:53:34.965608401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683"
Nov 12 22:53:34.985078 containerd[1458]: time="2024-11-12T22:53:34.985033726Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:35.009350 containerd[1458]: time="2024-11-12T22:53:35.009306571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:35.009879 containerd[1458]: time="2024-11-12T22:53:35.009849055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.050230834s"
Nov 12 22:53:35.009879 containerd[1458]: time="2024-11-12T22:53:35.009872961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\""
Nov 12 22:53:35.011305 containerd[1458]: time="2024-11-12T22:53:35.011275414Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 22:53:35.259189 containerd[1458]: time="2024-11-12T22:53:35.259079220Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a\""
Nov 12 22:53:35.259578 containerd[1458]: time="2024-11-12T22:53:35.259548882Z" level=info msg="StartContainer for \"d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a\""
Nov 12 22:53:35.289247 systemd[1]: Started cri-containerd-d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a.scope - libcontainer container d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a.
Nov 12 22:53:35.487227 containerd[1458]: time="2024-11-12T22:53:35.487179551Z" level=info msg="StartContainer for \"d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a\" returns successfully"
Nov 12 22:53:35.817619 kubelet[2683]: E1112 22:53:35.817576 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70"
Nov 12 22:53:35.969903 kubelet[2683]: E1112 22:53:35.969870 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:36.971147 kubelet[2683]: E1112 22:53:36.971094 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:37.788290 containerd[1458]: time="2024-11-12T22:53:37.788244559Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 22:53:37.791145 systemd[1]: cri-containerd-d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a.scope: Deactivated successfully.
Nov 12 22:53:37.811462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a-rootfs.mount: Deactivated successfully.
Nov 12 22:53:37.816950 kubelet[2683]: E1112 22:53:37.816911 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70"
Nov 12 22:53:37.879777 kubelet[2683]: I1112 22:53:37.879741 2683 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 22:53:38.323583 kubelet[2683]: I1112 22:53:38.323523 2683 topology_manager.go:215] "Topology Admit Handler" podUID="e5c7450d-f473-4f6c-94c1-660f160a33e6" podNamespace="calico-system" podName="calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:38.329629 systemd[1]: Created slice kubepods-besteffort-pode5c7450d_f473_4f6c_94c1_660f160a33e6.slice - libcontainer container kubepods-besteffort-pode5c7450d_f473_4f6c_94c1_660f160a33e6.slice.
Nov 12 22:53:38.412158 kubelet[2683]: I1112 22:53:38.411316 2683 topology_manager.go:215] "Topology Admit Handler" podUID="bf3a8091-a0f4-4679-8ae5-9dfbfe72d592" podNamespace="kube-system" podName="coredns-76f75df574-7msrw"
Nov 12 22:53:38.412316 kubelet[2683]: I1112 22:53:38.412291 2683 topology_manager.go:215] "Topology Admit Handler" podUID="f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8" podNamespace="calico-apiserver" podName="calico-apiserver-849946d688-74ts5"
Nov 12 22:53:38.412471 kubelet[2683]: I1112 22:53:38.412383 2683 topology_manager.go:215] "Topology Admit Handler" podUID="a9ecc475-91fc-4510-9d46-ca7309730f66" podNamespace="kube-system" podName="coredns-76f75df574-56scx"
Nov 12 22:53:38.412527 kubelet[2683]: I1112 22:53:38.412489 2683 topology_manager.go:215] "Topology Admit Handler" podUID="7ca5a75a-2ac5-4580-98c4-4b88103a40c6" podNamespace="calico-apiserver" podName="calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:38.413382 kubelet[2683]: I1112 22:53:38.413266 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pz49\" (UniqueName: \"kubernetes.io/projected/e5c7450d-f473-4f6c-94c1-660f160a33e6-kube-api-access-6pz49\") pod \"calico-kube-controllers-565bddf9d5-2stsd\" (UID: \"e5c7450d-f473-4f6c-94c1-660f160a33e6\") " pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:38.413382 kubelet[2683]: I1112 22:53:38.413306 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5c7450d-f473-4f6c-94c1-660f160a33e6-tigera-ca-bundle\") pod \"calico-kube-controllers-565bddf9d5-2stsd\" (UID: \"e5c7450d-f473-4f6c-94c1-660f160a33e6\") " pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:38.419637 systemd[1]: Created slice kubepods-besteffort-pod7ca5a75a_2ac5_4580_98c4_4b88103a40c6.slice - libcontainer container kubepods-besteffort-pod7ca5a75a_2ac5_4580_98c4_4b88103a40c6.slice.
Nov 12 22:53:38.424698 systemd[1]: Created slice kubepods-besteffort-podf399a5a4_5c83_4cb1_9e30_0bcffdf5c4a8.slice - libcontainer container kubepods-besteffort-podf399a5a4_5c83_4cb1_9e30_0bcffdf5c4a8.slice.
Nov 12 22:53:38.430080 systemd[1]: Created slice kubepods-burstable-podbf3a8091_a0f4_4679_8ae5_9dfbfe72d592.slice - libcontainer container kubepods-burstable-podbf3a8091_a0f4_4679_8ae5_9dfbfe72d592.slice.
Nov 12 22:53:38.434305 systemd[1]: Created slice kubepods-burstable-poda9ecc475_91fc_4510_9d46_ca7309730f66.slice - libcontainer container kubepods-burstable-poda9ecc475_91fc_4510_9d46_ca7309730f66.slice.
Nov 12 22:53:38.514201 kubelet[2683]: I1112 22:53:38.514156 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf3a8091-a0f4-4679-8ae5-9dfbfe72d592-config-volume\") pod \"coredns-76f75df574-7msrw\" (UID: \"bf3a8091-a0f4-4679-8ae5-9dfbfe72d592\") " pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:38.514371 kubelet[2683]: I1112 22:53:38.514342 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ca5a75a-2ac5-4580-98c4-4b88103a40c6-calico-apiserver-certs\") pod \"calico-apiserver-849946d688-mcmw7\" (UID: \"7ca5a75a-2ac5-4580-98c4-4b88103a40c6\") " pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:38.514401 kubelet[2683]: I1112 22:53:38.514378 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9ecc475-91fc-4510-9d46-ca7309730f66-config-volume\") pod \"coredns-76f75df574-56scx\" (UID: \"a9ecc475-91fc-4510-9d46-ca7309730f66\") " pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:38.514401 kubelet[2683]: I1112 22:53:38.514401 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlcs7\" (UniqueName: \"kubernetes.io/projected/bf3a8091-a0f4-4679-8ae5-9dfbfe72d592-kube-api-access-rlcs7\") pod \"coredns-76f75df574-7msrw\" (UID: \"bf3a8091-a0f4-4679-8ae5-9dfbfe72d592\") " pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:38.514451 kubelet[2683]: I1112 22:53:38.514421 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwwb\" (UniqueName: \"kubernetes.io/projected/a9ecc475-91fc-4510-9d46-ca7309730f66-kube-api-access-gqwwb\") pod \"coredns-76f75df574-56scx\" (UID: \"a9ecc475-91fc-4510-9d46-ca7309730f66\") " pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:38.514451 kubelet[2683]: I1112 22:53:38.514443 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4687\" (UniqueName: \"kubernetes.io/projected/7ca5a75a-2ac5-4580-98c4-4b88103a40c6-kube-api-access-t4687\") pod \"calico-apiserver-849946d688-mcmw7\" (UID: \"7ca5a75a-2ac5-4580-98c4-4b88103a40c6\") " pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:38.514588 kubelet[2683]: I1112 22:53:38.514562 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8-calico-apiserver-certs\") pod \"calico-apiserver-849946d688-74ts5\" (UID: \"f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8\") " pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:38.514620 kubelet[2683]: I1112 22:53:38.514589 2683 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqkr5\" (UniqueName: \"kubernetes.io/projected/f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8-kube-api-access-vqkr5\") pod \"calico-apiserver-849946d688-74ts5\" (UID: \"f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8\") " pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:38.629288 containerd[1458]: time="2024-11-12T22:53:38.629117878Z" level=info msg="shim disconnected" id=d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a namespace=k8s.io
Nov 12 22:53:38.629288 containerd[1458]: time="2024-11-12T22:53:38.629200478Z" level=warning msg="cleaning up after shim disconnected" id=d068e867f1a5479ba267c7f59a42a6f0aa12edfdb34c324ed7f8cc1b598ab92a namespace=k8s.io
Nov 12 22:53:38.629288 containerd[1458]: time="2024-11-12T22:53:38.629210828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:53:38.732214 kubelet[2683]: E1112 22:53:38.732170 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:38.732985 containerd[1458]: time="2024-11-12T22:53:38.732953324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:0,}"
Nov 12 22:53:38.733095 containerd[1458]: time="2024-11-12T22:53:38.733058709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 22:53:38.733503 containerd[1458]: time="2024-11-12T22:53:38.733469304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 22:53:38.736268 kubelet[2683]: E1112 22:53:38.736234 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:38.736545 containerd[1458]: time="2024-11-12T22:53:38.736519690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:0,}"
Nov 12 22:53:38.933043 containerd[1458]: time="2024-11-12T22:53:38.932920161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:0,}"
Nov 12 22:53:38.975999 kubelet[2683]: E1112 22:53:38.975954 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:38.976651 containerd[1458]: time="2024-11-12T22:53:38.976613180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\""
Nov 12 22:53:39.750779 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:55870.service - OpenSSH per-connection server daemon (10.0.0.1:55870).
Nov 12 22:53:39.808386 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 55870 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:53:39.809851 sshd-session[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:53:39.813883 systemd-logind[1437]: New session 12 of user core.
Nov 12 22:53:39.822287 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 22:53:39.827994 systemd[1]: Created slice kubepods-besteffort-podfd0f5998_8c5a_42b9_a810_034dc8c3ba70.slice - libcontainer container kubepods-besteffort-podfd0f5998_8c5a_42b9_a810_034dc8c3ba70.slice.
Nov 12 22:53:39.830455 containerd[1458]: time="2024-11-12T22:53:39.830418895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:0,}"
Nov 12 22:53:39.954055 sshd[3542]: Connection closed by 10.0.0.1 port 55870
Nov 12 22:53:39.954433 sshd-session[3540]: pam_unix(sshd:session): session closed for user core
Nov 12 22:53:39.958813 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:55870.service: Deactivated successfully.
Nov 12 22:53:39.960860 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 22:53:39.961480 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit.
Nov 12 22:53:39.962403 systemd-logind[1437]: Removed session 12.
Nov 12 22:53:40.573072 containerd[1458]: time="2024-11-12T22:53:40.573019839Z" level=error msg="Failed to destroy network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.573521 containerd[1458]: time="2024-11-12T22:53:40.573429831Z" level=error msg="encountered an error cleaning up failed sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.573521 containerd[1458]: time="2024-11-12T22:53:40.573484025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.573812 kubelet[2683]: E1112 22:53:40.573763 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.574284 kubelet[2683]: E1112 22:53:40.573844 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:40.574284 kubelet[2683]: E1112 22:53:40.573866 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:40.574284 kubelet[2683]: E1112 22:53:40.573928 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7msrw" podUID="bf3a8091-a0f4-4679-8ae5-9dfbfe72d592"
Nov 12 22:53:40.604445 containerd[1458]: time="2024-11-12T22:53:40.604398010Z" level=error msg="Failed to destroy network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.604786 containerd[1458]: time="2024-11-12T22:53:40.604761653Z" level=error msg="encountered an error cleaning up failed sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.604849 containerd[1458]: time="2024-11-12T22:53:40.604832289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.605102 kubelet[2683]: E1112 22:53:40.605063 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.605170 kubelet[2683]: E1112 22:53:40.605139 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:40.605170 kubelet[2683]: E1112 22:53:40.605163 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:40.605242 kubelet[2683]: E1112 22:53:40.605223 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" podUID="f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8"
Nov 12 22:53:40.679336 containerd[1458]: time="2024-11-12T22:53:40.679237681Z" level=error msg="Failed to destroy network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.679626 containerd[1458]: time="2024-11-12T22:53:40.679601414Z" level=error msg="encountered an error cleaning up failed sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.679686 containerd[1458]: time="2024-11-12T22:53:40.679656350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.679943 kubelet[2683]: E1112 22:53:40.679916 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.680005 kubelet[2683]: E1112 22:53:40.679973 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:40.680005 kubelet[2683]: E1112 22:53:40.679995 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:40.680059 kubelet[2683]: E1112 22:53:40.680052 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" podUID="7ca5a75a-2ac5-4580-98c4-4b88103a40c6"
Nov 12 22:53:40.803110 containerd[1458]: time="2024-11-12T22:53:40.803029712Z" level=error msg="Failed to destroy network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.803557 containerd[1458]: time="2024-11-12T22:53:40.803506674Z" level=error msg="encountered an error cleaning up failed sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.803607 containerd[1458]: time="2024-11-12T22:53:40.803577131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.803906 kubelet[2683]: E1112 22:53:40.803882 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.803970 kubelet[2683]: E1112 22:53:40.803940 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:40.803970 kubelet[2683]: E1112 22:53:40.803961 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:40.804023 kubelet[2683]: E1112 22:53:40.804015 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" podUID="e5c7450d-f473-4f6c-94c1-660f160a33e6"
Nov 12 22:53:40.827303 containerd[1458]: time="2024-11-12T22:53:40.827182909Z" level=error msg="Failed to destroy network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.827587 containerd[1458]: time="2024-11-12T22:53:40.827562022Z" level=error msg="encountered an error cleaning up failed sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.827639 containerd[1458]: time="2024-11-12T22:53:40.827619322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.827933 kubelet[2683]: E1112 22:53:40.827869 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:40.827933 kubelet[2683]: E1112 22:53:40.827955 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:40.828106 kubelet[2683]: E1112 22:53:40.827976 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:40.828106 kubelet[2683]: E1112 22:53:40.828032 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\\\": rpc error:
code = Unknown desc = failed to setup network for sandbox \\\"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56scx" podUID="a9ecc475-91fc-4510-9d46-ca7309730f66" Nov 12 22:53:40.977253 containerd[1458]: time="2024-11-12T22:53:40.977191886Z" level=error msg="Failed to destroy network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:40.977582 containerd[1458]: time="2024-11-12T22:53:40.977561671Z" level=error msg="encountered an error cleaning up failed sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:40.977648 containerd[1458]: time="2024-11-12T22:53:40.977614543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:40.977865 kubelet[2683]: E1112 22:53:40.977833 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:40.977936 kubelet[2683]: E1112 22:53:40.977888 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg" Nov 12 22:53:40.977936 kubelet[2683]: E1112 22:53:40.977911 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg" Nov 12 22:53:40.978008 kubelet[2683]: E1112 22:53:40.977961 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghdrg" 
podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70" Nov 12 22:53:40.979489 kubelet[2683]: I1112 22:53:40.979469 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e" Nov 12 22:53:40.980103 containerd[1458]: time="2024-11-12T22:53:40.980072945Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\"" Nov 12 22:53:40.980182 kubelet[2683]: I1112 22:53:40.980159 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49" Nov 12 22:53:40.980495 containerd[1458]: time="2024-11-12T22:53:40.980474652Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\"" Nov 12 22:53:40.980861 kubelet[2683]: I1112 22:53:40.980844 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536" Nov 12 22:53:40.981136 containerd[1458]: time="2024-11-12T22:53:40.981104138Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\"" Nov 12 22:53:40.981556 kubelet[2683]: I1112 22:53:40.981542 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70" Nov 12 22:53:40.981909 containerd[1458]: time="2024-11-12T22:53:40.981883594Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\"" Nov 12 22:53:41.020292 containerd[1458]: time="2024-11-12T22:53:41.020251457Z" level=info msg="Ensure that sandbox 2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49 in task-service has been cleanup successfully" Nov 12 22:53:41.020582 containerd[1458]: time="2024-11-12T22:53:41.020375217Z" level=info msg="Ensure that sandbox 
d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70 in task-service has been cleanup successfully" Nov 12 22:53:41.020582 containerd[1458]: time="2024-11-12T22:53:41.020267298Z" level=info msg="Ensure that sandbox a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e in task-service has been cleanup successfully" Nov 12 22:53:41.020582 containerd[1458]: time="2024-11-12T22:53:41.020479538Z" level=info msg="TearDown network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" successfully" Nov 12 22:53:41.020582 containerd[1458]: time="2024-11-12T22:53:41.020502081Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" returns successfully" Nov 12 22:53:41.020582 containerd[1458]: time="2024-11-12T22:53:41.020268260Z" level=info msg="Ensure that sandbox 42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536 in task-service has been cleanup successfully" Nov 12 22:53:41.020897 containerd[1458]: time="2024-11-12T22:53:41.020804806Z" level=info msg="TearDown network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" successfully" Nov 12 22:53:41.020897 containerd[1458]: time="2024-11-12T22:53:41.020836908Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" returns successfully" Nov 12 22:53:41.020897 containerd[1458]: time="2024-11-12T22:53:41.020848250Z" level=info msg="TearDown network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" successfully" Nov 12 22:53:41.020897 containerd[1458]: time="2024-11-12T22:53:41.020863670Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" returns successfully" Nov 12 22:53:41.021041 containerd[1458]: time="2024-11-12T22:53:41.020922434Z" level=info msg="TearDown network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" 
successfully" Nov 12 22:53:41.021041 containerd[1458]: time="2024-11-12T22:53:41.020932653Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" returns successfully" Nov 12 22:53:41.021148 kubelet[2683]: E1112 22:53:41.021105 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:41.021326 kubelet[2683]: I1112 22:53:41.021284 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d" Nov 12 22:53:41.021841 containerd[1458]: time="2024-11-12T22:53:41.021675659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:1,}" Nov 12 22:53:41.021841 containerd[1458]: time="2024-11-12T22:53:41.021730034Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\"" Nov 12 22:53:41.021956 containerd[1458]: time="2024-11-12T22:53:41.021899841Z" level=info msg="Ensure that sandbox 1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d in task-service has been cleanup successfully" Nov 12 22:53:41.021956 containerd[1458]: time="2024-11-12T22:53:41.021678063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:1,}" Nov 12 22:53:41.022094 containerd[1458]: time="2024-11-12T22:53:41.021684074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:1,}" Nov 12 22:53:41.022142 containerd[1458]: time="2024-11-12T22:53:41.022084638Z" level=info msg="TearDown network for sandbox 
\"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" successfully" Nov 12 22:53:41.022142 containerd[1458]: time="2024-11-12T22:53:41.022114476Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" returns successfully" Nov 12 22:53:41.022195 containerd[1458]: time="2024-11-12T22:53:41.021731346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:1,}" Nov 12 22:53:41.022443 kubelet[2683]: I1112 22:53:41.022422 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b" Nov 12 22:53:41.022524 kubelet[2683]: E1112 22:53:41.022460 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:41.022709 containerd[1458]: time="2024-11-12T22:53:41.022650501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:1,}" Nov 12 22:53:41.022792 containerd[1458]: time="2024-11-12T22:53:41.022767027Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\"" Nov 12 22:53:41.022985 containerd[1458]: time="2024-11-12T22:53:41.022958376Z" level=info msg="Ensure that sandbox 6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b in task-service has been cleanup successfully" Nov 12 22:53:41.023142 containerd[1458]: time="2024-11-12T22:53:41.023106692Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully" Nov 12 22:53:41.023142 containerd[1458]: time="2024-11-12T22:53:41.023121711Z" level=info msg="StopPodSandbox for 
\"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully" Nov 12 22:53:41.023470 containerd[1458]: time="2024-11-12T22:53:41.023433834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:1,}" Nov 12 22:53:41.302754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49-shm.mount: Deactivated successfully. Nov 12 22:53:41.302878 systemd[1]: run-netns-cni\x2de4d2ad1a\x2dbeac\x2df963\x2ddfbe\x2dc18a362b9a53.mount: Deactivated successfully. Nov 12 22:53:41.302953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70-shm.mount: Deactivated successfully. Nov 12 22:53:41.303023 systemd[1]: run-netns-cni\x2d55df6cff\x2d50ed\x2d272b\x2d3bed\x2dc75222325a59.mount: Deactivated successfully. Nov 12 22:53:41.303088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d-shm.mount: Deactivated successfully. 
Nov 12 22:53:41.384642 kubelet[2683]: I1112 22:53:41.384601 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:53:41.385206 kubelet[2683]: E1112 22:53:41.385192 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:42.024540 kubelet[2683]: E1112 22:53:42.024508 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:42.240205 containerd[1458]: time="2024-11-12T22:53:42.240151641Z" level=error msg="Failed to destroy network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.240610 containerd[1458]: time="2024-11-12T22:53:42.240512406Z" level=error msg="encountered an error cleaning up failed sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.240610 containerd[1458]: time="2024-11-12T22:53:42.240569045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 22:53:42.240854 kubelet[2683]: E1112 22:53:42.240803 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.240983 kubelet[2683]: E1112 22:53:42.240878 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" Nov 12 22:53:42.240983 kubelet[2683]: E1112 22:53:42.240900 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" Nov 12 22:53:42.240983 kubelet[2683]: E1112 22:53:42.240969 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" podUID="7ca5a75a-2ac5-4580-98c4-4b88103a40c6" Nov 12 22:53:42.308959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1-shm.mount: Deactivated successfully. Nov 12 22:53:42.321036 containerd[1458]: time="2024-11-12T22:53:42.320708845Z" level=error msg="Failed to destroy network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.321159 containerd[1458]: time="2024-11-12T22:53:42.321098296Z" level=error msg="encountered an error cleaning up failed sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.321189 containerd[1458]: time="2024-11-12T22:53:42.321173722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.321791 kubelet[2683]: E1112 
22:53:42.321430 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.321791 kubelet[2683]: E1112 22:53:42.321483 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" Nov 12 22:53:42.321791 kubelet[2683]: E1112 22:53:42.321503 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" Nov 12 22:53:42.336474 kubelet[2683]: E1112 22:53:42.321560 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" podUID="e5c7450d-f473-4f6c-94c1-660f160a33e6" Nov 12 22:53:42.324094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b-shm.mount: Deactivated successfully. Nov 12 22:53:42.345197 containerd[1458]: time="2024-11-12T22:53:42.345149963Z" level=error msg="Failed to destroy network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.345621 containerd[1458]: time="2024-11-12T22:53:42.345557359Z" level=error msg="encountered an error cleaning up failed sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.345621 containerd[1458]: time="2024-11-12T22:53:42.345609399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.347306 kubelet[2683]: E1112 22:53:42.347279 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.348266 kubelet[2683]: E1112 22:53:42.347883 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" Nov 12 22:53:42.348266 kubelet[2683]: E1112 22:53:42.347912 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" Nov 12 22:53:42.348464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea-shm.mount: Deactivated successfully. 
Nov 12 22:53:42.348706 kubelet[2683]: E1112 22:53:42.348591 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" podUID="f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8" Nov 12 22:53:42.353415 containerd[1458]: time="2024-11-12T22:53:42.353367041Z" level=error msg="Failed to destroy network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.354179 containerd[1458]: time="2024-11-12T22:53:42.353871684Z" level=error msg="encountered an error cleaning up failed sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:42.354179 containerd[1458]: time="2024-11-12T22:53:42.353943373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.354356 kubelet[2683]: E1112 22:53:42.354240 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.354356 kubelet[2683]: E1112 22:53:42.354299 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:42.354356 kubelet[2683]: E1112 22:53:42.354324 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:42.354471 kubelet[2683]: E1112 22:53:42.354384 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56scx" podUID="a9ecc475-91fc-4510-9d46-ca7309730f66"
Nov 12 22:53:42.355513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8-shm.mount: Deactivated successfully.
Nov 12 22:53:42.367903 containerd[1458]: time="2024-11-12T22:53:42.367833017Z" level=error msg="Failed to destroy network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.368413 containerd[1458]: time="2024-11-12T22:53:42.368362359Z" level=error msg="encountered an error cleaning up failed sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.368540 containerd[1458]: time="2024-11-12T22:53:42.368434669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.368762 kubelet[2683]: E1112 22:53:42.368714 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.368838 kubelet[2683]: E1112 22:53:42.368788 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:42.368838 kubelet[2683]: E1112 22:53:42.368816 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:42.368926 kubelet[2683]: E1112 22:53:42.368898 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7msrw" podUID="bf3a8091-a0f4-4679-8ae5-9dfbfe72d592"
Nov 12 22:53:42.369901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0-shm.mount: Deactivated successfully.
Nov 12 22:53:42.526229 containerd[1458]: time="2024-11-12T22:53:42.526173203Z" level=error msg="Failed to destroy network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.526625 containerd[1458]: time="2024-11-12T22:53:42.526592061Z" level=error msg="encountered an error cleaning up failed sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.526677 containerd[1458]: time="2024-11-12T22:53:42.526656967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.526994 kubelet[2683]: E1112 22:53:42.526958 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:42.527067 kubelet[2683]: E1112 22:53:42.527019 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:42.527105 kubelet[2683]: E1112 22:53:42.527069 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:42.527161 kubelet[2683]: E1112 22:53:42.527141 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70"
Nov 12 22:53:43.028044 kubelet[2683]: I1112 22:53:43.028011 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0"
Nov 12 22:53:43.029597 containerd[1458]: time="2024-11-12T22:53:43.028829840Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\""
Nov 12 22:53:43.029597 containerd[1458]: time="2024-11-12T22:53:43.029139697Z" level=info msg="Ensure that sandbox 7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0 in task-service has been cleanup successfully"
Nov 12 22:53:43.029597 containerd[1458]: time="2024-11-12T22:53:43.029475565Z" level=info msg="TearDown network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" successfully"
Nov 12 22:53:43.029597 containerd[1458]: time="2024-11-12T22:53:43.029488080Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" returns successfully"
Nov 12 22:53:43.029735 containerd[1458]: time="2024-11-12T22:53:43.029672274Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\""
Nov 12 22:53:43.029760 containerd[1458]: time="2024-11-12T22:53:43.029741017Z" level=info msg="TearDown network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" successfully"
Nov 12 22:53:43.029760 containerd[1458]: time="2024-11-12T22:53:43.029750476Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" returns successfully"
Nov 12 22:53:43.030004 kubelet[2683]: E1112 22:53:43.029986 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:43.031391 kubelet[2683]: I1112 22:53:43.031358 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e"
Nov 12 22:53:43.031445 containerd[1458]: time="2024-11-12T22:53:43.031146337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:2,}"
Nov 12 22:53:43.039000 containerd[1458]: time="2024-11-12T22:53:43.038758336Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:43.039487 containerd[1458]: time="2024-11-12T22:53:43.039264222Z" level=info msg="Ensure that sandbox 0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e in task-service has been cleanup successfully"
Nov 12 22:53:43.040000 kubelet[2683]: I1112 22:53:43.039958 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b"
Nov 12 22:53:43.040424 containerd[1458]: time="2024-11-12T22:53:43.040395664Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\""
Nov 12 22:53:43.040641 containerd[1458]: time="2024-11-12T22:53:43.040608895Z" level=info msg="Ensure that sandbox ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b in task-service has been cleanup successfully"
Nov 12 22:53:43.040920 containerd[1458]: time="2024-11-12T22:53:43.040891290Z" level=info msg="TearDown network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" successfully"
Nov 12 22:53:43.040920 containerd[1458]: time="2024-11-12T22:53:43.040909896Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" returns successfully"
Nov 12 22:53:43.041836 containerd[1458]: time="2024-11-12T22:53:43.041803208Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\""
Nov 12 22:53:43.042383 containerd[1458]: time="2024-11-12T22:53:43.041892401Z" level=info msg="TearDown network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" successfully"
Nov 12 22:53:43.042383 containerd[1458]: time="2024-11-12T22:53:43.041906547Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" returns successfully"
Nov 12 22:53:43.042383 containerd[1458]: time="2024-11-12T22:53:43.042305196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:2,}"
Nov 12 22:53:43.043504 containerd[1458]: time="2024-11-12T22:53:43.043349971Z" level=info msg="TearDown network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" successfully"
Nov 12 22:53:43.043504 containerd[1458]: time="2024-11-12T22:53:43.043368256Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" returns successfully"
Nov 12 22:53:43.045541 containerd[1458]: time="2024-11-12T22:53:43.045505097Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:43.045674 containerd[1458]: time="2024-11-12T22:53:43.045584299Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully"
Nov 12 22:53:43.045674 containerd[1458]: time="2024-11-12T22:53:43.045594269Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully"
Nov 12 22:53:43.046153 kubelet[2683]: I1112 22:53:43.045821 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1"
Nov 12 22:53:43.046497 containerd[1458]: time="2024-11-12T22:53:43.046475478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:2,}"
Nov 12 22:53:43.046744 containerd[1458]: time="2024-11-12T22:53:43.046724108Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\""
Nov 12 22:53:43.046984 containerd[1458]: time="2024-11-12T22:53:43.046966214Z" level=info msg="Ensure that sandbox 4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1 in task-service has been cleanup successfully"
Nov 12 22:53:43.047260 containerd[1458]: time="2024-11-12T22:53:43.047243980Z" level=info msg="TearDown network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" successfully"
Nov 12 22:53:43.047327 containerd[1458]: time="2024-11-12T22:53:43.047313444Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" returns successfully"
Nov 12 22:53:43.048048 containerd[1458]: time="2024-11-12T22:53:43.048029225Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\""
Nov 12 22:53:43.048272 kubelet[2683]: I1112 22:53:43.048155 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8"
Nov 12 22:53:43.049155 containerd[1458]: time="2024-11-12T22:53:43.049094709Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\""
Nov 12 22:53:43.049375 kubelet[2683]: I1112 22:53:43.049347 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea"
Nov 12 22:53:43.049460 containerd[1458]: time="2024-11-12T22:53:43.049442239Z" level=info msg="Ensure that sandbox b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8 in task-service has been cleanup successfully"
Nov 12 22:53:43.049752 containerd[1458]: time="2024-11-12T22:53:43.049702431Z" level=info msg="TearDown network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" successfully"
Nov 12 22:53:43.049752 containerd[1458]: time="2024-11-12T22:53:43.049716909Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" returns successfully"
Nov 12 22:53:43.052769 containerd[1458]: time="2024-11-12T22:53:43.052728487Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\""
Nov 12 22:53:43.052856 containerd[1458]: time="2024-11-12T22:53:43.052834651Z" level=info msg="TearDown network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" successfully"
Nov 12 22:53:43.052916 containerd[1458]: time="2024-11-12T22:53:43.052853117Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" returns successfully"
Nov 12 22:53:43.052951 containerd[1458]: time="2024-11-12T22:53:43.052914765Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\""
Nov 12 22:53:43.053182 containerd[1458]: time="2024-11-12T22:53:43.053158926Z" level=info msg="Ensure that sandbox 4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea in task-service has been cleanup successfully"
Nov 12 22:53:43.055048 kubelet[2683]: E1112 22:53:43.055023 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:43.056388 containerd[1458]: time="2024-11-12T22:53:43.056351804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:2,}"
Nov 12 22:53:43.056711 containerd[1458]: time="2024-11-12T22:53:43.056407241Z" level=info msg="TearDown network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" successfully"
Nov 12 22:53:43.056754 containerd[1458]: time="2024-11-12T22:53:43.056711357Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" returns successfully"
Nov 12 22:53:43.057067 containerd[1458]: time="2024-11-12T22:53:43.057031675Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\""
Nov 12 22:53:43.057161 containerd[1458]: time="2024-11-12T22:53:43.057141215Z" level=info msg="TearDown network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" successfully"
Nov 12 22:53:43.057161 containerd[1458]: time="2024-11-12T22:53:43.057157758Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" returns successfully"
Nov 12 22:53:43.057555 containerd[1458]: time="2024-11-12T22:53:43.057534153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:2,}"
Nov 12 22:53:43.303012 systemd[1]: run-netns-cni\x2ddc670955\x2d74dd\x2d38c1\x2deb33\x2df2afa0211ddf.mount: Deactivated successfully.
Nov 12 22:53:43.303167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e-shm.mount: Deactivated successfully.
Nov 12 22:53:43.303266 systemd[1]: run-netns-cni\x2d355adb97\x2d7152\x2d36e2\x2de894\x2d1156d1424b51.mount: Deactivated successfully.
Nov 12 22:53:43.303353 systemd[1]: run-netns-cni\x2da6e949c8\x2dcc25\x2d6f06\x2d8931\x2d71dc40da504f.mount: Deactivated successfully.
Nov 12 22:53:43.303445 systemd[1]: run-netns-cni\x2d9cded2da\x2d0723\x2d02b9\x2d567a\x2defa60739ef1f.mount: Deactivated successfully.
Nov 12 22:53:43.303537 systemd[1]: run-netns-cni\x2d26de318a\x2dcdf7\x2d12b2\x2d0d7e\x2db44b720dc8a7.mount: Deactivated successfully.
Nov 12 22:53:43.303648 systemd[1]: run-netns-cni\x2dff2fb5ea\x2d4711\x2dd9b2\x2d9549\x2d4d3ad670c750.mount: Deactivated successfully.
Nov 12 22:53:43.524862 containerd[1458]: time="2024-11-12T22:53:43.524810138Z" level=info msg="TearDown network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" successfully"
Nov 12 22:53:43.524862 containerd[1458]: time="2024-11-12T22:53:43.524847080Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" returns successfully"
Nov 12 22:53:43.525564 containerd[1458]: time="2024-11-12T22:53:43.525520548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:2,}"
Nov 12 22:53:44.970189 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:55878.service - OpenSSH per-connection server daemon (10.0.0.1:55878).
Nov 12 22:53:45.027364 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 55878 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:53:45.029023 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:53:45.034519 systemd-logind[1437]: New session 13 of user core.
Nov 12 22:53:45.040256 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 22:53:45.167680 sshd[4012]: Connection closed by 10.0.0.1 port 55878
Nov 12 22:53:45.168585 sshd-session[4010]: pam_unix(sshd:session): session closed for user core
Nov 12 22:53:45.173365 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit.
Nov 12 22:53:45.173970 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:55878.service: Deactivated successfully.
Nov 12 22:53:45.177430 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 22:53:45.178770 systemd-logind[1437]: Removed session 13.
Nov 12 22:53:46.327758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479518343.mount: Deactivated successfully.
Nov 12 22:53:47.789348 containerd[1458]: time="2024-11-12T22:53:47.789288365Z" level=error msg="Failed to destroy network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.789824 containerd[1458]: time="2024-11-12T22:53:47.789667775Z" level=error msg="encountered an error cleaning up failed sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.789824 containerd[1458]: time="2024-11-12T22:53:47.789719243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.790036 kubelet[2683]: E1112 22:53:47.790014 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.790330 kubelet[2683]: E1112 22:53:47.790071 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:47.790330 kubelet[2683]: E1112 22:53:47.790097 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:47.790330 kubelet[2683]: E1112 22:53:47.790158 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70"
Nov 12 22:53:47.823227 containerd[1458]: time="2024-11-12T22:53:47.823178095Z" level=error msg="Failed to destroy network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.823571 containerd[1458]: time="2024-11-12T22:53:47.823543798Z" level=error msg="encountered an error cleaning up failed sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.823623 containerd[1458]: time="2024-11-12T22:53:47.823605546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.823851 kubelet[2683]: E1112 22:53:47.823820 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.823907 kubelet[2683]: E1112 22:53:47.823874 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:47.823907 kubelet[2683]: E1112 22:53:47.823893 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56scx"
Nov 12 22:53:47.823960 kubelet[2683]: E1112 22:53:47.823948 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56scx_kube-system(a9ecc475-91fc-4510-9d46-ca7309730f66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56scx" podUID="a9ecc475-91fc-4510-9d46-ca7309730f66"
Nov 12 22:53:47.944972 containerd[1458]: time="2024-11-12T22:53:47.944916957Z" level=error msg="Failed to destroy network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.945336 containerd[1458]: time="2024-11-12T22:53:47.945307288Z" level=error msg="encountered an error cleaning up failed sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.945404 containerd[1458]: time="2024-11-12T22:53:47.945369097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.945655 kubelet[2683]: E1112 22:53:47.945618 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.945722 kubelet[2683]: E1112 22:53:47.945680 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:47.945722 kubelet[2683]: E1112 22:53:47.945704 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7msrw"
Nov 12 22:53:47.945781 kubelet[2683]: E1112 22:53:47.945758 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7msrw_kube-system(bf3a8091-a0f4-4679-8ae5-9dfbfe72d592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7msrw" podUID="bf3a8091-a0f4-4679-8ae5-9dfbfe72d592"
Nov 12 22:53:47.951974 containerd[1458]: time="2024-11-12T22:53:47.951923250Z" level=error msg="Failed to destroy network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.952325 containerd[1458]: time="2024-11-12T22:53:47.952298412Z" level=error msg="encountered an error cleaning up failed sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.952371 containerd[1458]: time="2024-11-12T22:53:47.952354119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.952570 kubelet[2683]: E1112 22:53:47.952549 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.952639 kubelet[2683]: E1112 22:53:47.952600 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:47.952639 kubelet[2683]: E1112 22:53:47.952622 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-74ts5"
Nov 12 22:53:47.952695 kubelet[2683]: E1112 22:53:47.952673 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-74ts5_calico-apiserver(f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" podUID="f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8"
Nov 12 22:53:47.971115 containerd[1458]: time="2024-11-12T22:53:47.971054730Z" level=error msg="Failed to destroy network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.971488 containerd[1458]: time="2024-11-12T22:53:47.971457825Z" level=error msg="encountered an error cleaning up failed sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.971525 containerd[1458]: time="2024-11-12T22:53:47.971513061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.971812 kubelet[2683]: E1112 22:53:47.971775 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:47.971947 kubelet[2683]: E1112 22:53:47.971837 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:47.971947 kubelet[2683]: E1112 22:53:47.971871 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7"
Nov 12 22:53:47.971947 kubelet[2683]: E1112 22:53:47.971930 2683 pod_workers.go:1298]
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849946d688-mcmw7_calico-apiserver(7ca5a75a-2ac5-4580-98c4-4b88103a40c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" podUID="7ca5a75a-2ac5-4580-98c4-4b88103a40c6" Nov 12 22:53:47.985579 containerd[1458]: time="2024-11-12T22:53:47.985537480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:48.008384 containerd[1458]: time="2024-11-12T22:53:48.008314421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 22:53:48.010145 containerd[1458]: time="2024-11-12T22:53:48.010086206Z" level=error msg="Failed to destroy network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:53:48.010516 containerd[1458]: time="2024-11-12T22:53:48.010480584Z" level=error msg="encountered an error cleaning up failed sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 12 22:53:48.010562 containerd[1458]: time="2024-11-12T22:53:48.010541852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:48.010815 kubelet[2683]: E1112 22:53:48.010781 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:48.010877 kubelet[2683]: E1112 22:53:48.010845 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:48.010877 kubelet[2683]: E1112 22:53:48.010866 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd"
Nov 12 22:53:48.010930 kubelet[2683]: E1112 22:53:48.010924 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-565bddf9d5-2stsd_calico-system(e5c7450d-f473-4f6c-94c1-660f160a33e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" podUID="e5c7450d-f473-4f6c-94c1-660f160a33e6"
Nov 12 22:53:48.022041 containerd[1458]: time="2024-11-12T22:53:48.021997416Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:48.032998 containerd[1458]: time="2024-11-12T22:53:48.032956065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:48.033529 containerd[1458]: time="2024-11-12T22:53:48.033499480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 9.056841944s"
Nov 12 22:53:48.033529 containerd[1458]: time="2024-11-12T22:53:48.033525279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\""
Nov 12 22:53:48.040428 containerd[1458]: time="2024-11-12T22:53:48.040341188Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Nov 12 22:53:48.058843 kubelet[2683]: I1112 22:53:48.058811 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139"
Nov 12 22:53:48.059226 containerd[1458]: time="2024-11-12T22:53:48.059194639Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\""
Nov 12 22:53:48.059384 containerd[1458]: time="2024-11-12T22:53:48.059369004Z" level=info msg="Ensure that sandbox 73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139 in task-service has been cleanup successfully"
Nov 12 22:53:48.059559 containerd[1458]: time="2024-11-12T22:53:48.059536887Z" level=info msg="TearDown network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" successfully"
Nov 12 22:53:48.059559 containerd[1458]: time="2024-11-12T22:53:48.059550664Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" returns successfully"
Nov 12 22:53:48.059855 containerd[1458]: time="2024-11-12T22:53:48.059835682Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:48.059936 containerd[1458]: time="2024-11-12T22:53:48.059915845Z" level=info msg="TearDown network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" successfully"
Nov 12 22:53:48.059936 containerd[1458]: time="2024-11-12T22:53:48.059931155Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" returns successfully"
Nov 12 22:53:48.060159 kubelet[2683]: I1112 22:53:48.060137 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1"
Nov 12 22:53:48.060249 containerd[1458]: time="2024-11-12T22:53:48.060227935Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:48.060319 containerd[1458]: time="2024-11-12T22:53:48.060303751Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully"
Nov 12 22:53:48.060440 containerd[1458]: time="2024-11-12T22:53:48.060317297Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully"
Nov 12 22:53:48.060440 containerd[1458]: time="2024-11-12T22:53:48.060411227Z" level=info msg="StopPodSandbox for \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\""
Nov 12 22:53:48.060552 containerd[1458]: time="2024-11-12T22:53:48.060534625Z" level=info msg="Ensure that sandbox c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1 in task-service has been cleanup successfully"
Nov 12 22:53:48.060685 containerd[1458]: time="2024-11-12T22:53:48.060668201Z" level=info msg="TearDown network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" successfully"
Nov 12 22:53:48.060685 containerd[1458]: time="2024-11-12T22:53:48.060681738Z" level=info msg="StopPodSandbox for \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" returns successfully"
Nov 12 22:53:48.060790 containerd[1458]: time="2024-11-12T22:53:48.060754948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:3,}"
Nov 12 22:53:48.061048 containerd[1458]: time="2024-11-12T22:53:48.061025046Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\""
Nov 12 22:53:48.061117 containerd[1458]: time="2024-11-12T22:53:48.061100702Z" level=info msg="TearDown network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" successfully"
Nov 12 22:53:48.061117 containerd[1458]: time="2024-11-12T22:53:48.061113767Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" returns successfully"
Nov 12 22:53:48.061304 containerd[1458]: time="2024-11-12T22:53:48.061285868Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\""
Nov 12 22:53:48.061410 containerd[1458]: time="2024-11-12T22:53:48.061354250Z" level=info msg="TearDown network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" successfully"
Nov 12 22:53:48.061410 containerd[1458]: time="2024-11-12T22:53:48.061364639Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" returns successfully"
Nov 12 22:53:48.062072 kubelet[2683]: I1112 22:53:48.061771 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307"
Nov 12 22:53:48.062111 containerd[1458]: time="2024-11-12T22:53:48.061806398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:3,}"
Nov 12 22:53:48.062161 containerd[1458]: time="2024-11-12T22:53:48.062118208Z" level=info msg="StopPodSandbox for \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\""
Nov 12 22:53:48.062283 containerd[1458]: time="2024-11-12T22:53:48.062263608Z" level=info msg="Ensure that sandbox 99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307 in task-service has been cleanup successfully"
Nov 12 22:53:48.062456 containerd[1458]: time="2024-11-12T22:53:48.062437552Z" level=info msg="TearDown network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" successfully"
Nov 12 22:53:48.062456 containerd[1458]: time="2024-11-12T22:53:48.062451268Z" level=info msg="StopPodSandbox for \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" returns successfully"
Nov 12 22:53:48.062660 containerd[1458]: time="2024-11-12T22:53:48.062644158Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\""
Nov 12 22:53:48.062728 containerd[1458]: time="2024-11-12T22:53:48.062715144Z" level=info msg="TearDown network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" successfully"
Nov 12 22:53:48.062754 containerd[1458]: time="2024-11-12T22:53:48.062727138Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" returns successfully"
Nov 12 22:53:48.063093 containerd[1458]: time="2024-11-12T22:53:48.062953723Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\""
Nov 12 22:53:48.063093 containerd[1458]: time="2024-11-12T22:53:48.063037965Z" level=info msg="TearDown network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" successfully"
Nov 12 22:53:48.063093 containerd[1458]: time="2024-11-12T22:53:48.063048145Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" returns successfully"
Nov 12 22:53:48.063314 containerd[1458]: time="2024-11-12T22:53:48.063296322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:3,}"
Nov 12 22:53:48.063535 kubelet[2683]: I1112 22:53:48.063519 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35"
Nov 12 22:53:48.063868 containerd[1458]: time="2024-11-12T22:53:48.063846499Z" level=info msg="StopPodSandbox for \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\""
Nov 12 22:53:48.064035 containerd[1458]: time="2024-11-12T22:53:48.064009953Z" level=info msg="Ensure that sandbox 855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35 in task-service has been cleanup successfully"
Nov 12 22:53:48.064276 containerd[1458]: time="2024-11-12T22:53:48.064257389Z" level=info msg="TearDown network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" successfully"
Nov 12 22:53:48.064335 containerd[1458]: time="2024-11-12T22:53:48.064280003Z" level=info msg="StopPodSandbox for \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" returns successfully"
Nov 12 22:53:48.064709 containerd[1458]: time="2024-11-12T22:53:48.064691363Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\""
Nov 12 22:53:48.064776 containerd[1458]: time="2024-11-12T22:53:48.064760856Z" level=info msg="TearDown network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" successfully"
Nov 12 22:53:48.064776 containerd[1458]: time="2024-11-12T22:53:48.064772969Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" returns successfully"
Nov 12 22:53:48.064839 kubelet[2683]: I1112 22:53:48.064820 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f"
Nov 12 22:53:48.065532 containerd[1458]: time="2024-11-12T22:53:48.065119134Z" level=info msg="StopPodSandbox for \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\""
Nov 12 22:53:48.065532 containerd[1458]: time="2024-11-12T22:53:48.065180432Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\""
Nov 12 22:53:48.065532 containerd[1458]: time="2024-11-12T22:53:48.065257421Z" level=info msg="TearDown network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" successfully"
Nov 12 22:53:48.065532 containerd[1458]: time="2024-11-12T22:53:48.065267520Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" returns successfully"
Nov 12 22:53:48.065532 containerd[1458]: time="2024-11-12T22:53:48.065272199Z" level=info msg="Ensure that sandbox 0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f in task-service has been cleanup successfully"
Nov 12 22:53:48.065681 kubelet[2683]: E1112 22:53:48.065459 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:48.065708 containerd[1458]: time="2024-11-12T22:53:48.065602845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:3,}"
Nov 12 22:53:48.065783 containerd[1458]: time="2024-11-12T22:53:48.065761930Z" level=info msg="TearDown network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" successfully"
Nov 12 22:53:48.065783 containerd[1458]: time="2024-11-12T22:53:48.065778803Z" level=info msg="StopPodSandbox for \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" returns successfully"
Nov 12 22:53:48.066074 containerd[1458]: time="2024-11-12T22:53:48.066037310Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\""
Nov 12 22:53:48.066159 containerd[1458]: time="2024-11-12T22:53:48.066139776Z" level=info msg="TearDown network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" successfully"
Nov 12 22:53:48.066159 containerd[1458]: time="2024-11-12T22:53:48.066156358Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" returns successfully"
Nov 12 22:53:48.066357 containerd[1458]: time="2024-11-12T22:53:48.066334520Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\""
Nov 12 22:53:48.066438 containerd[1458]: time="2024-11-12T22:53:48.066421167Z" level=info msg="TearDown network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" successfully"
Nov 12 22:53:48.066483 containerd[1458]: time="2024-11-12T22:53:48.066436997Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" returns successfully"
Nov 12 22:53:48.066553 kubelet[2683]: I1112 22:53:48.066539 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434"
Nov 12 22:53:48.066792 containerd[1458]: time="2024-11-12T22:53:48.066758445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:3,}"
Nov 12 22:53:48.066880 containerd[1458]: time="2024-11-12T22:53:48.066861053Z" level=info msg="StopPodSandbox for \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\""
Nov 12 22:53:48.067041 containerd[1458]: time="2024-11-12T22:53:48.067018765Z" level=info msg="Ensure that sandbox 55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434 in task-service has been cleanup successfully"
Nov 12 22:53:48.067189 containerd[1458]: time="2024-11-12T22:53:48.067170597Z" level=info msg="TearDown network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" successfully"
Nov 12 22:53:48.067224 containerd[1458]: time="2024-11-12T22:53:48.067187750Z" level=info msg="StopPodSandbox for \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" returns successfully"
Nov 12 22:53:48.067430 containerd[1458]: time="2024-11-12T22:53:48.067410658Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\""
Nov 12 22:53:48.067505 containerd[1458]: time="2024-11-12T22:53:48.067489671Z" level=info msg="TearDown network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" successfully"
Nov 12 22:53:48.067534 containerd[1458]: time="2024-11-12T22:53:48.067504298Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" returns successfully"
Nov 12 22:53:48.067721 containerd[1458]: time="2024-11-12T22:53:48.067695175Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\""
Nov 12 22:53:48.067773 containerd[1458]: time="2024-11-12T22:53:48.067760420Z" level=info msg="TearDown network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" successfully"
Nov 12 22:53:48.067773 containerd[1458]: time="2024-11-12T22:53:48.067770359Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" returns successfully"
Nov 12 22:53:48.067946 kubelet[2683]: E1112 22:53:48.067930 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:48.068142 containerd[1458]: time="2024-11-12T22:53:48.068110233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:3,}"
Nov 12 22:53:48.658895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307-shm.mount: Deactivated successfully.
Nov 12 22:53:48.659008 systemd[1]: run-netns-cni\x2d3b2eca67\x2daf6c\x2d0a14\x2dbce6\x2d8ebe50a10ed4.mount: Deactivated successfully.
Nov 12 22:53:48.659086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f-shm.mount: Deactivated successfully.
Nov 12 22:53:48.659170 systemd[1]: run-netns-cni\x2ded9fd181\x2d25e9\x2dc094\x2d8604\x2dd3d5e7f6bf0d.mount: Deactivated successfully.
Nov 12 22:53:48.659239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434-shm.mount: Deactivated successfully.
Nov 12 22:53:48.659308 systemd[1]: run-netns-cni\x2d7a3b3569\x2df32d\x2dbf65\x2d8852\x2de7d6eb3327d9.mount: Deactivated successfully.
Nov 12 22:53:48.659372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35-shm.mount: Deactivated successfully.
Nov 12 22:53:48.659442 systemd[1]: run-netns-cni\x2df206db10\x2d06f3\x2d94f0\x2ddec9\x2df197ff69d042.mount: Deactivated successfully.
Nov 12 22:53:48.659512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139-shm.mount: Deactivated successfully.
Nov 12 22:53:49.133778 containerd[1458]: time="2024-11-12T22:53:49.133716229Z" level=info msg="CreateContainer within sandbox \"cf334eafbb3a34e6d64ffd2b36ab60ab2917faae737b6e458da57652b77df570\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8\""
Nov 12 22:53:49.134290 containerd[1458]: time="2024-11-12T22:53:49.134272778Z" level=info msg="StartContainer for \"53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8\""
Nov 12 22:53:49.210373 systemd[1]: Started cri-containerd-53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8.scope - libcontainer container 53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8.
Nov 12 22:53:49.238657 containerd[1458]: time="2024-11-12T22:53:49.238607615Z" level=error msg="Failed to destroy network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:49.239035 containerd[1458]: time="2024-11-12T22:53:49.239009137Z" level=error msg="encountered an error cleaning up failed sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:49.239116 containerd[1458]: time="2024-11-12T22:53:49.239076306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:49.239344 kubelet[2683]: E1112 22:53:49.239311 2683 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 22:53:49.239624 kubelet[2683]: E1112 22:53:49.239380 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:49.239624 kubelet[2683]: E1112 22:53:49.239405 2683 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdrg"
Nov 12 22:53:49.239624 kubelet[2683]: E1112 22:53:49.239474 2683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghdrg_calico-system(fd0f5998-8c5a-42b9-a810-034dc8c3ba70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghdrg" podUID="fd0f5998-8c5a-42b9-a810-034dc8c3ba70"
Nov 12 22:53:49.503986 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Nov 12 22:53:49.504892 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Nov 12 22:53:49.512927 containerd[1458]: time="2024-11-12T22:53:49.512864364Z" level=info msg="StartContainer for \"53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8\" returns successfully"
Nov 12 22:53:49.662162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4-shm.mount: Deactivated successfully.
Nov 12 22:53:50.080761 kubelet[2683]: E1112 22:53:50.080733 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:53:50.084234 kubelet[2683]: I1112 22:53:50.084160 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4"
Nov 12 22:53:50.084639 containerd[1458]: time="2024-11-12T22:53:50.084563451Z" level=info msg="StopPodSandbox for \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\""
Nov 12 22:53:50.095669 containerd[1458]: time="2024-11-12T22:53:50.094680309Z" level=info msg="Ensure that sandbox 45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4 in task-service has been cleanup successfully"
Nov 12 22:53:50.096730 containerd[1458]: time="2024-11-12T22:53:50.096701779Z" level=info msg="TearDown network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" successfully"
Nov 12 22:53:50.096791 containerd[1458]: time="2024-11-12T22:53:50.096729191Z" level=info msg="StopPodSandbox for \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" returns successfully"
Nov 12 22:53:50.106202 containerd[1458]: time="2024-11-12T22:53:50.105971681Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\""
Nov 12 22:53:50.106741 systemd[1]: run-netns-cni\x2d0964d977\x2db8a4\x2d31ba\x2d4d37\x2da72ee14ffe30.mount: Deactivated successfully.
Nov 12 22:53:50.114648 containerd[1458]: time="2024-11-12T22:53:50.113436258Z" level=info msg="TearDown network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" successfully"
Nov 12 22:53:50.114648 containerd[1458]: time="2024-11-12T22:53:50.113472147Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" returns successfully"
Nov 12 22:53:50.114648 containerd[1458]: time="2024-11-12T22:53:50.114417131Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:50.114822 containerd[1458]: time="2024-11-12T22:53:50.114743027Z" level=info msg="TearDown network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" successfully"
Nov 12 22:53:50.114822 containerd[1458]: time="2024-11-12T22:53:50.114757013Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" returns successfully"
Nov 12 22:53:50.115979 containerd[1458]: time="2024-11-12T22:53:50.115953099Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:50.116226 containerd[1458]: time="2024-11-12T22:53:50.116195384Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully"
Nov 12 22:53:50.116345 containerd[1458]: time="2024-11-12T22:53:50.116326195Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully"
Nov 12 22:53:50.117121 containerd[1458]: time="2024-11-12T22:53:50.116990029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:4,}"
Nov 12 22:53:50.122416 kubelet[2683]: I1112 22:53:50.122363 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zws85" podStartSLOduration=3.152920872 podStartE2EDuration="31.122293999s" podCreationTimestamp="2024-11-12 22:53:19 +0000 UTC" firstStartedPulling="2024-11-12 22:53:20.064324453 +0000 UTC m=+23.328048007" lastFinishedPulling="2024-11-12 22:53:48.03369758 +0000 UTC m=+51.297421134" observedRunningTime="2024-11-12 22:53:50.119626479 +0000 UTC m=+53.383350043" watchObservedRunningTime="2024-11-12 22:53:50.122293999 +0000 UTC m=+53.386017553"
Nov 12 22:53:50.186884 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:55790.service - OpenSSH per-connection server daemon (10.0.0.1:55790).
Nov 12 22:53:50.273516 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 55790 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:53:50.275724 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:53:50.280699 systemd-logind[1437]: New session 14 of user core.
Nov 12 22:53:50.287266 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 22:53:50.370246 systemd-networkd[1394]: cali0947f0b1d70: Link UP Nov 12 22:53:50.370433 systemd-networkd[1394]: cali0947f0b1d70: Gained carrier Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.000 [INFO][4368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.041 [INFO][4368] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--56scx-eth0 coredns-76f75df574- kube-system a9ecc475-91fc-4510-9d46-ca7309730f66 839 0 2024-11-12 22:53:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-56scx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0947f0b1d70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.041 [INFO][4368] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.220 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" HandleID="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Workload="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.295 [INFO][4415] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" HandleID="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Workload="localhost-k8s-coredns--76f75df574--56scx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-56scx", "timestamp":"2024-11-12 22:53:50.220249591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.295 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.296 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.296 [INFO][4415] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.299 [INFO][4415] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.304 [INFO][4415] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.309 [INFO][4415] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.310 [INFO][4415] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.323 [INFO][4415] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.323 [INFO][4415] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.327 [INFO][4415] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555 Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.346 [INFO][4415] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4415] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4415] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" host="localhost" Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:53:50.391790 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" HandleID="k8s-pod-network.cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Workload="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.362 [INFO][4368] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--56scx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a9ecc475-91fc-4510-9d46-ca7309730f66", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-56scx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0947f0b1d70", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.362 [INFO][4368] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.362 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0947f0b1d70 ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.370 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.370 [INFO][4368] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--56scx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a9ecc475-91fc-4510-9d46-ca7309730f66", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555", Pod:"coredns-76f75df574-56scx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0947f0b1d70", MAC:"9a:e2:cd:1c:93:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.392641 containerd[1458]: 2024-11-12 22:53:50.386 [INFO][4368] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555" Namespace="kube-system" 
Pod="coredns-76f75df574-56scx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--56scx-eth0" Nov 12 22:53:50.446586 sshd[4488]: Connection closed by 10.0.0.1 port 55790 Nov 12 22:53:50.448369 systemd-networkd[1394]: calie027af70a47: Link UP Nov 12 22:53:50.449322 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:50.449330 systemd-networkd[1394]: calie027af70a47: Gained carrier Nov 12 22:53:50.458726 containerd[1458]: time="2024-11-12T22:53:50.456890640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:50.458726 containerd[1458]: time="2024-11-12T22:53:50.456939855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:50.458726 containerd[1458]: time="2024-11-12T22:53:50.456954362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.458726 containerd[1458]: time="2024-11-12T22:53:50.457043714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.457488 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:55790.service: Deactivated successfully. Nov 12 22:53:50.461079 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:53:50.463534 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:53:50.467248 systemd-logind[1437]: Removed session 14. Nov 12 22:53:50.474564 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:55792.service - OpenSSH per-connection server daemon (10.0.0.1:55792). Nov 12 22:53:50.478292 systemd[1]: Started cri-containerd-cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555.scope - libcontainer container cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555. 
Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:49.948 [INFO][4353] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.035 [INFO][4353] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0 calico-apiserver-849946d688- calico-apiserver 7ca5a75a-2ac5-4580-98c4-4b88103a40c6 836 0 2024-11-12 22:53:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849946d688 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-849946d688-mcmw7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie027af70a47 [] []}} ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.035 [INFO][4353] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.219 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" HandleID="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Workload="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4414] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" HandleID="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Workload="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e0680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-849946d688-mcmw7", "timestamp":"2024-11-12 22:53:50.219489693 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.358 [INFO][4414] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.387 [INFO][4414] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.399 [INFO][4414] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.406 [INFO][4414] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.407 [INFO][4414] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.409 [INFO][4414] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.409 [INFO][4414] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.411 [INFO][4414] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1 Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.418 [INFO][4414] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4414] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4414] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" host="localhost" Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:53:50.488861 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" HandleID="k8s-pod-network.27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Workload="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.444 [INFO][4353] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0", GenerateName:"calico-apiserver-849946d688-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ca5a75a-2ac5-4580-98c4-4b88103a40c6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849946d688", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-849946d688-mcmw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie027af70a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.445 [INFO][4353] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.445 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie027af70a47 ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.449 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.450 [INFO][4353] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0", GenerateName:"calico-apiserver-849946d688-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ca5a75a-2ac5-4580-98c4-4b88103a40c6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849946d688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1", Pod:"calico-apiserver-849946d688-mcmw7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie027af70a47", MAC:"e6:67:c5:6a:91:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.490034 containerd[1458]: 2024-11-12 22:53:50.485 [INFO][4353] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-mcmw7" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--mcmw7-eth0" Nov 12 22:53:50.499193 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:50.513049 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 55792 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:53:50.514648 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:50.520733 systemd-logind[1437]: New session 15 of user core. Nov 12 22:53:50.526743 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:53:50.551430 containerd[1458]: time="2024-11-12T22:53:50.551343468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:50.551567 containerd[1458]: time="2024-11-12T22:53:50.551447318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:50.551824 containerd[1458]: time="2024-11-12T22:53:50.551538183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.553509 containerd[1458]: time="2024-11-12T22:53:50.553447176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.556186 systemd-networkd[1394]: calidd6f3b67a01: Link UP Nov 12 22:53:50.557162 systemd-networkd[1394]: calidd6f3b67a01: Gained carrier Nov 12 22:53:50.560559 containerd[1458]: time="2024-11-12T22:53:50.560518640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56scx,Uid:a9ecc475-91fc-4510-9d46-ca7309730f66,Namespace:kube-system,Attempt:3,} returns sandbox id \"cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555\"" Nov 12 22:53:50.562215 kubelet[2683]: E1112 22:53:50.561837 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:50.569219 containerd[1458]: time="2024-11-12T22:53:50.568197968Z" level=info msg="CreateContainer within sandbox \"cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:53:50.581589 systemd[1]: Started cri-containerd-27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1.scope - libcontainer container 27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1. 
Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.015 [INFO][4381] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.038 [INFO][4381] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--849946d688--74ts5-eth0 calico-apiserver-849946d688- calico-apiserver f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8 838 0 2024-11-12 22:53:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849946d688 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-849946d688-74ts5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd6f3b67a01 [] []}} ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.038 [INFO][4381] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.220 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" HandleID="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Workload="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4411] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" HandleID="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Workload="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e6aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-849946d688-74ts5", "timestamp":"2024-11-12 22:53:50.220906081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.440 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.445 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.487 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.494 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.497 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.502 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.502 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.506 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393 Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.521 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.537 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.538 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" host="localhost" Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.538 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:53:50.588211 containerd[1458]: 2024-11-12 22:53:50.538 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" HandleID="k8s-pod-network.cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Workload="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.551 [INFO][4381] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849946d688--74ts5-eth0", GenerateName:"calico-apiserver-849946d688-", Namespace:"calico-apiserver", SelfLink:"", UID:"f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849946d688", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-849946d688-74ts5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd6f3b67a01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.551 [INFO][4381] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.551 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd6f3b67a01 ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.557 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.558 [INFO][4381] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849946d688--74ts5-eth0", GenerateName:"calico-apiserver-849946d688-", Namespace:"calico-apiserver", SelfLink:"", UID:"f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849946d688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393", Pod:"calico-apiserver-849946d688-74ts5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd6f3b67a01", MAC:"1e:23:54:98:77:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.594018 containerd[1458]: 2024-11-12 22:53:50.582 [INFO][4381] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393" Namespace="calico-apiserver" Pod="calico-apiserver-849946d688-74ts5" WorkloadEndpoint="localhost-k8s-calico--apiserver--849946d688--74ts5-eth0" Nov 12 22:53:50.605724 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:50.635000 containerd[1458]: time="2024-11-12T22:53:50.634882610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-mcmw7,Uid:7ca5a75a-2ac5-4580-98c4-4b88103a40c6,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1\"" Nov 12 22:53:50.639483 containerd[1458]: time="2024-11-12T22:53:50.639374993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 22:53:50.666424 systemd[1]: run-containerd-runc-k8s.io-53e34aa63bc61ee6c6c26b5821999d176e095d06404e2c906aed6153dccfb0b8-runc.jwKaPn.mount: Deactivated successfully. Nov 12 22:53:50.676705 containerd[1458]: time="2024-11-12T22:53:50.676596744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:50.676705 containerd[1458]: time="2024-11-12T22:53:50.676667711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:50.676705 containerd[1458]: time="2024-11-12T22:53:50.676679243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.676921 containerd[1458]: time="2024-11-12T22:53:50.676768264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.706372 systemd[1]: Started cri-containerd-cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393.scope - libcontainer container cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393. Nov 12 22:53:50.713709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223723505.mount: Deactivated successfully. Nov 12 22:53:50.717942 systemd-networkd[1394]: califf5d6fdc1d4: Link UP Nov 12 22:53:50.718893 systemd-networkd[1394]: califf5d6fdc1d4: Gained carrier Nov 12 22:53:50.739614 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.069 [INFO][4399] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.103 [INFO][4399] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--7msrw-eth0 coredns-76f75df574- kube-system bf3a8091-a0f4-4679-8ae5-9dfbfe72d592 830 0 2024-11-12 22:53:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-7msrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf5d6fdc1d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.103 [INFO][4399] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" 
Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.225 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" HandleID="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Workload="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" HandleID="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Workload="localhost-k8s-coredns--76f75df574--7msrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365c90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-7msrw", "timestamp":"2024-11-12 22:53:50.225332356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.301 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.538 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.539 [INFO][4438] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.549 [INFO][4438] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.584 [INFO][4438] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.601 [INFO][4438] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.604 [INFO][4438] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.607 [INFO][4438] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.607 [INFO][4438] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.609 [INFO][4438] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.666 [INFO][4438] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4438] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4438] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" host="localhost" Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:53:50.750732 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" HandleID="k8s-pod-network.4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Workload="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.705 [INFO][4399] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--7msrw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bf3a8091-a0f4-4679-8ae5-9dfbfe72d592", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-7msrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf5d6fdc1d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.706 [INFO][4399] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.706 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf5d6fdc1d4 ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.718 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 
22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.719 [INFO][4399] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--7msrw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bf3a8091-a0f4-4679-8ae5-9dfbfe72d592", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae", Pod:"coredns-76f75df574-7msrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf5d6fdc1d4", MAC:"1a:a9:d7:ca:e7:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.751472 containerd[1458]: 2024-11-12 22:53:50.748 [INFO][4399] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae" Namespace="kube-system" Pod="coredns-76f75df574-7msrw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--7msrw-eth0" Nov 12 22:53:50.771735 containerd[1458]: time="2024-11-12T22:53:50.771689031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849946d688-74ts5,Uid:f399a5a4-5c83-4cb1-9e30-0bcffdf5c4a8,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393\"" Nov 12 22:53:50.785116 containerd[1458]: time="2024-11-12T22:53:50.785036049Z" level=info msg="CreateContainer within sandbox \"cd6762f1c30ad3f04d0b4da5b0a747656b20faf2817051616c2c15a0dd32b555\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdc8fe5aa105ebdff0d6bee99f6aaa2b65055995c5273b91da9a24fff1baf356\"" Nov 12 22:53:50.786441 containerd[1458]: time="2024-11-12T22:53:50.786263564Z" level=info msg="StartContainer for \"cdc8fe5aa105ebdff0d6bee99f6aaa2b65055995c5273b91da9a24fff1baf356\"" Nov 12 22:53:50.790209 systemd-networkd[1394]: cali8a84192e2b1: Link UP Nov 12 22:53:50.791404 systemd-networkd[1394]: cali8a84192e2b1: Gained carrier Nov 12 22:53:50.817382 containerd[1458]: time="2024-11-12T22:53:50.817116412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:50.817382 containerd[1458]: time="2024-11-12T22:53:50.817208469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:50.817382 containerd[1458]: time="2024-11-12T22:53:50.817241903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.817533 containerd[1458]: time="2024-11-12T22:53:50.817445714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.827417 systemd[1]: Started cri-containerd-cdc8fe5aa105ebdff0d6bee99f6aaa2b65055995c5273b91da9a24fff1baf356.scope - libcontainer container cdc8fe5aa105ebdff0d6bee99f6aaa2b65055995c5273b91da9a24fff1baf356. Nov 12 22:53:50.842385 sshd[4592]: Connection closed by 10.0.0.1 port 55792 Nov 12 22:53:50.842651 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:50.844655 systemd[1]: Started cri-containerd-4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae.scope - libcontainer container 4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae. 
Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:49.932 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.035 [INFO][4340] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0 calico-kube-controllers-565bddf9d5- calico-system e5c7450d-f473-4f6c-94c1-660f160a33e6 824 0 2024-11-12 22:53:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:565bddf9d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-565bddf9d5-2stsd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a84192e2b1 [] []}} ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.035 [INFO][4340] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.219 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" HandleID="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Workload="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 
22:53:50.302 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" HandleID="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Workload="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-565bddf9d5-2stsd", "timestamp":"2024-11-12 22:53:50.219875353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.303 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.697 [INFO][4410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.699 [INFO][4410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.704 [INFO][4410] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.715 [INFO][4410] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.719 [INFO][4410] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.724 [INFO][4410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.724 [INFO][4410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.727 [INFO][4410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747 Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.752 [INFO][4410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.780 [INFO][4410] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.780 [INFO][4410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" host="localhost" Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.780 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:53:50.845083 containerd[1458]: 2024-11-12 22:53:50.781 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" HandleID="k8s-pod-network.d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Workload="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.784 [INFO][4340] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0", GenerateName:"calico-kube-controllers-565bddf9d5-", Namespace:"calico-system", SelfLink:"", UID:"e5c7450d-f473-4f6c-94c1-660f160a33e6", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565bddf9d5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-565bddf9d5-2stsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a84192e2b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.784 [INFO][4340] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.784 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a84192e2b1 ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.791 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.792 [INFO][4340] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0", GenerateName:"calico-kube-controllers-565bddf9d5-", Namespace:"calico-system", SelfLink:"", UID:"e5c7450d-f473-4f6c-94c1-660f160a33e6", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565bddf9d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747", Pod:"calico-kube-controllers-565bddf9d5-2stsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a84192e2b1", MAC:"86:a8:41:09:5a:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:50.845857 containerd[1458]: 2024-11-12 22:53:50.839 [INFO][4340] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747" Namespace="calico-system" Pod="calico-kube-controllers-565bddf9d5-2stsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565bddf9d5--2stsd-eth0" Nov 12 22:53:50.850452 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:55792.service: Deactivated successfully. Nov 12 22:53:50.853522 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:53:50.854931 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:53:50.861593 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:55798.service - OpenSSH per-connection server daemon (10.0.0.1:55798). Nov 12 22:53:50.866285 systemd-logind[1437]: Removed session 15. Nov 12 22:53:50.867836 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:50.889141 containerd[1458]: time="2024-11-12T22:53:50.886255926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:50.889141 containerd[1458]: time="2024-11-12T22:53:50.886322784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:50.889141 containerd[1458]: time="2024-11-12T22:53:50.886341239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.897704 containerd[1458]: time="2024-11-12T22:53:50.887396113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:50.914373 systemd[1]: Started cri-containerd-d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747.scope - libcontainer container d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747. 
Nov 12 22:53:50.926401 sshd[4777]: Accepted publickey for core from 10.0.0.1 port 55798 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:53:50.928684 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:50.931442 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:50.934829 systemd-logind[1437]: New session 16 of user core. Nov 12 22:53:50.938203 containerd[1458]: time="2024-11-12T22:53:50.938044476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7msrw,Uid:bf3a8091-a0f4-4679-8ae5-9dfbfe72d592,Namespace:kube-system,Attempt:3,} returns sandbox id \"4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae\"" Nov 12 22:53:50.938278 containerd[1458]: time="2024-11-12T22:53:50.938261774Z" level=info msg="StartContainer for \"cdc8fe5aa105ebdff0d6bee99f6aaa2b65055995c5273b91da9a24fff1baf356\" returns successfully" Nov 12 22:53:50.938877 kubelet[2683]: E1112 22:53:50.938859 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:50.940486 containerd[1458]: time="2024-11-12T22:53:50.940456486Z" level=info msg="CreateContainer within sandbox \"4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:53:50.941266 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 12 22:53:50.960851 containerd[1458]: time="2024-11-12T22:53:50.960807667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565bddf9d5-2stsd,Uid:e5c7450d-f473-4f6c-94c1-660f160a33e6,Namespace:calico-system,Attempt:3,} returns sandbox id \"d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747\"" Nov 12 22:53:51.019320 systemd-networkd[1394]: cali588ecc089e6: Link UP Nov 12 22:53:51.019584 systemd-networkd[1394]: cali588ecc089e6: Gained carrier Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.274 [INFO][4476] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4476] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ghdrg-eth0 csi-node-driver- calico-system fd0f5998-8c5a-42b9-a810-034dc8c3ba70 662 0 2024-11-12 22:53:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ghdrg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali588ecc089e6 [] []}} ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.300 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.342 
[INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" HandleID="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Workload="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.357 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" HandleID="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Workload="localhost-k8s-csi--node--driver--ghdrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ghdrg", "timestamp":"2024-11-12 22:53:50.342474564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.357 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.781 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.781 [INFO][4491] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.786 [INFO][4491] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.861 [INFO][4491] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.916 [INFO][4491] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.958 [INFO][4491] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.978 [INFO][4491] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.978 [INFO][4491] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.984 [INFO][4491] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089 Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:50.997 [INFO][4491] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:51.010 [INFO][4491] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:51.010 [INFO][4491] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" host="localhost" Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:51.011 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:53:51.045534 containerd[1458]: 2024-11-12 22:53:51.011 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" HandleID="k8s-pod-network.150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Workload="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.014 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ghdrg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd0f5998-8c5a-42b9-a810-034dc8c3ba70", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ghdrg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali588ecc089e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.014 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.014 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali588ecc089e6 ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.019 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.021 [INFO][4476] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" 
Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ghdrg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd0f5998-8c5a-42b9-a810-034dc8c3ba70", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 53, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089", Pod:"csi-node-driver-ghdrg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali588ecc089e6", MAC:"aa:86:cd:e2:3a:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:53:51.046587 containerd[1458]: 2024-11-12 22:53:51.041 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089" Namespace="calico-system" Pod="csi-node-driver-ghdrg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ghdrg-eth0" Nov 12 22:53:51.071512 containerd[1458]: 
time="2024-11-12T22:53:51.071463425Z" level=info msg="CreateContainer within sandbox \"4943e59c25985659839c5123e2648b29853c7df8e4d362ccf39218f1b78abcae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b12706dccc793ea55938058c1f0d83ad0b00665e106a7770a0f864ce43bcd11\"" Nov 12 22:53:51.072153 containerd[1458]: time="2024-11-12T22:53:51.072109806Z" level=info msg="StartContainer for \"8b12706dccc793ea55938058c1f0d83ad0b00665e106a7770a0f864ce43bcd11\"" Nov 12 22:53:51.088053 sshd[4826]: Connection closed by 10.0.0.1 port 55798 Nov 12 22:53:51.088532 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:51.092416 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:55798.service: Deactivated successfully. Nov 12 22:53:51.095113 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:53:51.096613 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:53:51.098432 systemd-logind[1437]: Removed session 16. Nov 12 22:53:51.109353 systemd[1]: Started cri-containerd-8b12706dccc793ea55938058c1f0d83ad0b00665e106a7770a0f864ce43bcd11.scope - libcontainer container 8b12706dccc793ea55938058c1f0d83ad0b00665e106a7770a0f864ce43bcd11. Nov 12 22:53:51.116378 kubelet[2683]: E1112 22:53:51.115964 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:51.127873 kubelet[2683]: E1112 22:53:51.127847 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:51.165602 containerd[1458]: time="2024-11-12T22:53:51.164796970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:51.165602 containerd[1458]: time="2024-11-12T22:53:51.164934744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:51.165602 containerd[1458]: time="2024-11-12T22:53:51.164954412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:51.165894 containerd[1458]: time="2024-11-12T22:53:51.165826565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:51.183874 containerd[1458]: time="2024-11-12T22:53:51.183711311Z" level=info msg="StartContainer for \"8b12706dccc793ea55938058c1f0d83ad0b00665e106a7770a0f864ce43bcd11\" returns successfully" Nov 12 22:53:51.187722 systemd[1]: Started cri-containerd-150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089.scope - libcontainer container 150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089. Nov 12 22:53:51.201086 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:53:51.211083 containerd[1458]: time="2024-11-12T22:53:51.211002382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdrg,Uid:fd0f5998-8c5a-42b9-a810-034dc8c3ba70,Namespace:calico-system,Attempt:4,} returns sandbox id \"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089\"" Nov 12 22:53:51.548163 kernel: bpftool[5100]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 22:53:51.663109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152563724.mount: Deactivated successfully. 
Nov 12 22:53:51.666768 systemd-networkd[1394]: cali0947f0b1d70: Gained IPv6LL Nov 12 22:53:51.776851 systemd-networkd[1394]: vxlan.calico: Link UP Nov 12 22:53:51.776862 systemd-networkd[1394]: vxlan.calico: Gained carrier Nov 12 22:53:51.852911 systemd-networkd[1394]: califf5d6fdc1d4: Gained IPv6LL Nov 12 22:53:51.853695 systemd-networkd[1394]: calie027af70a47: Gained IPv6LL Nov 12 22:53:51.916812 systemd-networkd[1394]: calidd6f3b67a01: Gained IPv6LL Nov 12 22:53:52.133421 kubelet[2683]: E1112 22:53:52.133313 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:52.134046 kubelet[2683]: E1112 22:53:52.133454 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:52.180201 kubelet[2683]: I1112 22:53:52.180150 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7msrw" podStartSLOduration=42.180079159 podStartE2EDuration="42.180079159s" podCreationTimestamp="2024-11-12 22:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:52.179669644 +0000 UTC m=+55.443393198" watchObservedRunningTime="2024-11-12 22:53:52.180079159 +0000 UTC m=+55.443802713" Nov 12 22:53:52.180396 kubelet[2683]: I1112 22:53:52.180273 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-56scx" podStartSLOduration=42.180255558 podStartE2EDuration="42.180255558s" podCreationTimestamp="2024-11-12 22:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:51.138425784 +0000 UTC m=+54.402149338" 
watchObservedRunningTime="2024-11-12 22:53:52.180255558 +0000 UTC m=+55.443979112" Nov 12 22:53:52.620291 systemd-networkd[1394]: cali8a84192e2b1: Gained IPv6LL Nov 12 22:53:52.941301 systemd-networkd[1394]: cali588ecc089e6: Gained IPv6LL Nov 12 22:53:53.135950 kubelet[2683]: E1112 22:53:53.135600 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:53.135950 kubelet[2683]: E1112 22:53:53.135849 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:53.196298 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Nov 12 22:53:53.811076 containerd[1458]: time="2024-11-12T22:53:53.811015081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:53.814967 containerd[1458]: time="2024-11-12T22:53:53.814915494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 22:53:53.836616 containerd[1458]: time="2024-11-12T22:53:53.836557342Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:53.860669 containerd[1458]: time="2024-11-12T22:53:53.860615598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:53.861502 containerd[1458]: time="2024-11-12T22:53:53.861466518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo 
tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.22197875s" Nov 12 22:53:53.861502 containerd[1458]: time="2024-11-12T22:53:53.861495264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 22:53:53.862288 containerd[1458]: time="2024-11-12T22:53:53.862038184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 22:53:53.863802 containerd[1458]: time="2024-11-12T22:53:53.863764281Z" level=info msg="CreateContainer within sandbox \"27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 22:53:54.097513 containerd[1458]: time="2024-11-12T22:53:54.097371036Z" level=info msg="CreateContainer within sandbox \"27dbac0a79d8096dbfde73853beeb71297faa91c4b459376c00f88a13b39c2a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e88b113860d4f454b97ea697b2fb78bcd7ae2fa96ca417b07ef0ae13abbe39f4\"" Nov 12 22:53:54.098507 containerd[1458]: time="2024-11-12T22:53:54.098466684Z" level=info msg="StartContainer for \"e88b113860d4f454b97ea697b2fb78bcd7ae2fa96ca417b07ef0ae13abbe39f4\"" Nov 12 22:53:54.130275 systemd[1]: Started cri-containerd-e88b113860d4f454b97ea697b2fb78bcd7ae2fa96ca417b07ef0ae13abbe39f4.scope - libcontainer container e88b113860d4f454b97ea697b2fb78bcd7ae2fa96ca417b07ef0ae13abbe39f4. 
Nov 12 22:53:54.139153 kubelet[2683]: E1112 22:53:54.138809 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:54.139432 kubelet[2683]: E1112 22:53:54.139416 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:54.193326 containerd[1458]: time="2024-11-12T22:53:54.193275074Z" level=info msg="StartContainer for \"e88b113860d4f454b97ea697b2fb78bcd7ae2fa96ca417b07ef0ae13abbe39f4\" returns successfully" Nov 12 22:53:54.484851 containerd[1458]: time="2024-11-12T22:53:54.484794251Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:54.505582 containerd[1458]: time="2024-11-12T22:53:54.505499172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 22:53:54.508054 containerd[1458]: time="2024-11-12T22:53:54.508009439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 645.936166ms" Nov 12 22:53:54.508178 containerd[1458]: time="2024-11-12T22:53:54.508056198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 22:53:54.510172 containerd[1458]: time="2024-11-12T22:53:54.509530853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 22:53:54.510272 containerd[1458]: 
time="2024-11-12T22:53:54.510220964Z" level=info msg="CreateContainer within sandbox \"cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 22:53:54.808346 containerd[1458]: time="2024-11-12T22:53:54.808205632Z" level=info msg="CreateContainer within sandbox \"cc76021d39cd241d23b452ff06812fcc7d3cec5095a321970a97283d2f8b9393\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f4cb1c9957b7b6a32b9246fcb2ae3cd27fd46ba67f4f2c819cde34b02dd74df\"" Nov 12 22:53:54.808958 containerd[1458]: time="2024-11-12T22:53:54.808928215Z" level=info msg="StartContainer for \"2f4cb1c9957b7b6a32b9246fcb2ae3cd27fd46ba67f4f2c819cde34b02dd74df\"" Nov 12 22:53:54.837387 systemd[1]: Started cri-containerd-2f4cb1c9957b7b6a32b9246fcb2ae3cd27fd46ba67f4f2c819cde34b02dd74df.scope - libcontainer container 2f4cb1c9957b7b6a32b9246fcb2ae3cd27fd46ba67f4f2c819cde34b02dd74df. Nov 12 22:53:54.933284 containerd[1458]: time="2024-11-12T22:53:54.933187154Z" level=info msg="StartContainer for \"2f4cb1c9957b7b6a32b9246fcb2ae3cd27fd46ba67f4f2c819cde34b02dd74df\" returns successfully" Nov 12 22:53:55.278327 kubelet[2683]: I1112 22:53:55.277658 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-849946d688-mcmw7" podStartSLOduration=34.054890567 podStartE2EDuration="37.277597963s" podCreationTimestamp="2024-11-12 22:53:18 +0000 UTC" firstStartedPulling="2024-11-12 22:53:50.639106217 +0000 UTC m=+53.902829771" lastFinishedPulling="2024-11-12 22:53:53.861813613 +0000 UTC m=+57.125537167" observedRunningTime="2024-11-12 22:53:55.231054848 +0000 UTC m=+58.494778402" watchObservedRunningTime="2024-11-12 22:53:55.277597963 +0000 UTC m=+58.541321517" Nov 12 22:53:56.105393 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:55812.service - OpenSSH per-connection server daemon (10.0.0.1:55812). 
Nov 12 22:53:56.146583 kubelet[2683]: I1112 22:53:56.146552 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 22:53:56.146694 kubelet[2683]: I1112 22:53:56.146582 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 22:53:56.160667 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 55812 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:53:56.162330 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:53:56.166470 systemd-logind[1437]: New session 17 of user core.
Nov 12 22:53:56.176292 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 22:53:56.301775 sshd[5305]: Connection closed by 10.0.0.1 port 55812
Nov 12 22:53:56.302094 sshd-session[5303]: pam_unix(sshd:session): session closed for user core
Nov 12 22:53:56.306354 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:55812.service: Deactivated successfully.
Nov 12 22:53:56.308849 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 22:53:56.309713 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit.
Nov 12 22:53:56.311209 systemd-logind[1437]: Removed session 17.
Nov 12 22:53:56.805626 containerd[1458]: time="2024-11-12T22:53:56.805585060Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:56.806052 containerd[1458]: time="2024-11-12T22:53:56.805702244Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully"
Nov 12 22:53:56.806052 containerd[1458]: time="2024-11-12T22:53:56.805713184Z" level=info msg="StopPodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully"
Nov 12 22:53:56.811767 containerd[1458]: time="2024-11-12T22:53:56.811742249Z" level=info msg="RemovePodSandbox for \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:56.826653 containerd[1458]: time="2024-11-12T22:53:56.826621995Z" level=info msg="Forcibly stopping sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\""
Nov 12 22:53:56.826752 containerd[1458]: time="2024-11-12T22:53:56.826703661Z" level=info msg="TearDown network for sandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" successfully"
Nov 12 22:53:57.331774 containerd[1458]: time="2024-11-12T22:53:57.331716098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.332003 containerd[1458]: time="2024-11-12T22:53:57.331798746Z" level=info msg="RemovePodSandbox \"6afc3b8df99611ea449698acc759179517f87a3a18430471cacbfadd6242c90b\" returns successfully"
Nov 12 22:53:57.333768 containerd[1458]: time="2024-11-12T22:53:57.333743605Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:57.342111 containerd[1458]: time="2024-11-12T22:53:57.333845530Z" level=info msg="TearDown network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" successfully"
Nov 12 22:53:57.342219 containerd[1458]: time="2024-11-12T22:53:57.342110503Z" level=info msg="StopPodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" returns successfully"
Nov 12 22:53:57.342539 containerd[1458]: time="2024-11-12T22:53:57.342496011Z" level=info msg="RemovePodSandbox for \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:57.342539 containerd[1458]: time="2024-11-12T22:53:57.342537290Z" level=info msg="Forcibly stopping sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\""
Nov 12 22:53:57.342707 containerd[1458]: time="2024-11-12T22:53:57.342628083Z" level=info msg="TearDown network for sandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" successfully"
Nov 12 22:53:57.375918 containerd[1458]: time="2024-11-12T22:53:57.375478617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.375918 containerd[1458]: time="2024-11-12T22:53:57.375542880Z" level=info msg="RemovePodSandbox \"0de974aaa7bd21e11f54adc88ee18f85bdbcb8ea1f7e3474cb96ccfe1b0e664e\" returns successfully"
Nov 12 22:53:57.376060 containerd[1458]: time="2024-11-12T22:53:57.375972031Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\""
Nov 12 22:53:57.376188 containerd[1458]: time="2024-11-12T22:53:57.376111848Z" level=info msg="TearDown network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" successfully"
Nov 12 22:53:57.376188 containerd[1458]: time="2024-11-12T22:53:57.376185529Z" level=info msg="StopPodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" returns successfully"
Nov 12 22:53:57.376458 containerd[1458]: time="2024-11-12T22:53:57.376436559Z" level=info msg="RemovePodSandbox for \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\""
Nov 12 22:53:57.376541 containerd[1458]: time="2024-11-12T22:53:57.376522293Z" level=info msg="Forcibly stopping sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\""
Nov 12 22:53:57.376666 containerd[1458]: time="2024-11-12T22:53:57.376620912Z" level=info msg="TearDown network for sandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" successfully"
Nov 12 22:53:57.411763 containerd[1458]: time="2024-11-12T22:53:57.411588063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.411763 containerd[1458]: time="2024-11-12T22:53:57.411667344Z" level=info msg="RemovePodSandbox \"73b275730446d98a65754363e528b81cd9a8e520084d2dfbd9e15d262227d139\" returns successfully"
Nov 12 22:53:57.412296 containerd[1458]: time="2024-11-12T22:53:57.412267422Z" level=info msg="StopPodSandbox for \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\""
Nov 12 22:53:57.412408 containerd[1458]: time="2024-11-12T22:53:57.412386399Z" level=info msg="TearDown network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" successfully"
Nov 12 22:53:57.412408 containerd[1458]: time="2024-11-12T22:53:57.412404604Z" level=info msg="StopPodSandbox for \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" returns successfully"
Nov 12 22:53:57.413950 containerd[1458]: time="2024-11-12T22:53:57.412674581Z" level=info msg="RemovePodSandbox for \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\""
Nov 12 22:53:57.413950 containerd[1458]: time="2024-11-12T22:53:57.412700480Z" level=info msg="Forcibly stopping sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\""
Nov 12 22:53:57.413950 containerd[1458]: time="2024-11-12T22:53:57.412772537Z" level=info msg="TearDown network for sandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" successfully"
Nov 12 22:53:57.447025 containerd[1458]: time="2024-11-12T22:53:57.446966330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.447122 containerd[1458]: time="2024-11-12T22:53:57.447033319Z" level=info msg="RemovePodSandbox \"45c8f0a279e380804628fd095344f1b330fe6340f9f5f23a55a6a8c390fd6fb4\" returns successfully"
Nov 12 22:53:57.447546 containerd[1458]: time="2024-11-12T22:53:57.447521001Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\""
Nov 12 22:53:57.447968 containerd[1458]: time="2024-11-12T22:53:57.447709621Z" level=info msg="TearDown network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" successfully"
Nov 12 22:53:57.447968 containerd[1458]: time="2024-11-12T22:53:57.447723057Z" level=info msg="StopPodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" returns successfully"
Nov 12 22:53:57.448101 containerd[1458]: time="2024-11-12T22:53:57.448066173Z" level=info msg="RemovePodSandbox for \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\""
Nov 12 22:53:57.448200 containerd[1458]: time="2024-11-12T22:53:57.448175542Z" level=info msg="Forcibly stopping sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\""
Nov 12 22:53:57.448312 containerd[1458]: time="2024-11-12T22:53:57.448264994Z" level=info msg="TearDown network for sandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" successfully"
Nov 12 22:53:57.467773 containerd[1458]: time="2024-11-12T22:53:57.467746915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.467955 containerd[1458]: time="2024-11-12T22:53:57.467936798Z" level=info msg="RemovePodSandbox \"d620bd28796939b3b94b8c0a9ac05f730f55ec972373a518805e3fdd4103cb70\" returns successfully"
Nov 12 22:53:57.468465 containerd[1458]: time="2024-11-12T22:53:57.468439900Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\""
Nov 12 22:53:57.468572 containerd[1458]: time="2024-11-12T22:53:57.468536655Z" level=info msg="TearDown network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" successfully"
Nov 12 22:53:57.468572 containerd[1458]: time="2024-11-12T22:53:57.468553618Z" level=info msg="StopPodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" returns successfully"
Nov 12 22:53:57.469217 containerd[1458]: time="2024-11-12T22:53:57.468818875Z" level=info msg="RemovePodSandbox for \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\""
Nov 12 22:53:57.469217 containerd[1458]: time="2024-11-12T22:53:57.468846327Z" level=info msg="Forcibly stopping sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\""
Nov 12 22:53:57.469217 containerd[1458]: time="2024-11-12T22:53:57.468923765Z" level=info msg="TearDown network for sandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" successfully"
Nov 12 22:53:57.492189 containerd[1458]: time="2024-11-12T22:53:57.492141190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.492303 containerd[1458]: time="2024-11-12T22:53:57.492212006Z" level=info msg="RemovePodSandbox \"4221d48f939b691373849097e7ac831b607e9fa5707c3a9d42cf564830cea9ea\" returns successfully"
Nov 12 22:53:57.492897 containerd[1458]: time="2024-11-12T22:53:57.492677084Z" level=info msg="StopPodSandbox for \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\""
Nov 12 22:53:57.492897 containerd[1458]: time="2024-11-12T22:53:57.492804197Z" level=info msg="TearDown network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" successfully"
Nov 12 22:53:57.492897 containerd[1458]: time="2024-11-12T22:53:57.492816492Z" level=info msg="StopPodSandbox for \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" returns successfully"
Nov 12 22:53:57.493340 containerd[1458]: time="2024-11-12T22:53:57.493316407Z" level=info msg="RemovePodSandbox for \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\""
Nov 12 22:53:57.493340 containerd[1458]: time="2024-11-12T22:53:57.493339522Z" level=info msg="Forcibly stopping sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\""
Nov 12 22:53:57.493674 containerd[1458]: time="2024-11-12T22:53:57.493405728Z" level=info msg="TearDown network for sandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" successfully"
Nov 12 22:53:57.531299 containerd[1458]: time="2024-11-12T22:53:57.531234571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.531299 containerd[1458]: time="2024-11-12T22:53:57.531306498Z" level=info msg="RemovePodSandbox \"0d3f4b8d837b29c41fdd76f35b4c38ac7a82db33a7fa21c738988cee9273ec5f\" returns successfully"
Nov 12 22:53:57.532265 containerd[1458]: time="2024-11-12T22:53:57.532215105Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\""
Nov 12 22:53:57.532370 containerd[1458]: time="2024-11-12T22:53:57.532351556Z" level=info msg="TearDown network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" successfully"
Nov 12 22:53:57.532407 containerd[1458]: time="2024-11-12T22:53:57.532366235Z" level=info msg="StopPodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" returns successfully"
Nov 12 22:53:57.533027 containerd[1458]: time="2024-11-12T22:53:57.532699231Z" level=info msg="RemovePodSandbox for \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\""
Nov 12 22:53:57.533027 containerd[1458]: time="2024-11-12T22:53:57.532730561Z" level=info msg="Forcibly stopping sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\""
Nov 12 22:53:57.533027 containerd[1458]: time="2024-11-12T22:53:57.532807438Z" level=info msg="TearDown network for sandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" successfully"
Nov 12 22:53:57.555527 containerd[1458]: time="2024-11-12T22:53:57.555487505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.555591 containerd[1458]: time="2024-11-12T22:53:57.555557940Z" level=info msg="RemovePodSandbox \"a9342d85c706ebe00e6b18aaf9c0d0e18afae59c0b1c361196b539fd8f83f25e\" returns successfully"
Nov 12 22:53:57.555884 containerd[1458]: time="2024-11-12T22:53:57.555855159Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\""
Nov 12 22:53:57.555978 containerd[1458]: time="2024-11-12T22:53:57.555957053Z" level=info msg="TearDown network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" successfully"
Nov 12 22:53:57.555978 containerd[1458]: time="2024-11-12T22:53:57.555970649Z" level=info msg="StopPodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" returns successfully"
Nov 12 22:53:57.557049 containerd[1458]: time="2024-11-12T22:53:57.556366195Z" level=info msg="RemovePodSandbox for \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\""
Nov 12 22:53:57.557049 containerd[1458]: time="2024-11-12T22:53:57.556386494Z" level=info msg="Forcibly stopping sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\""
Nov 12 22:53:57.557049 containerd[1458]: time="2024-11-12T22:53:57.556447972Z" level=info msg="TearDown network for sandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" successfully"
Nov 12 22:53:57.876425 containerd[1458]: time="2024-11-12T22:53:57.876360282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:57.876894 containerd[1458]: time="2024-11-12T22:53:57.876436929Z" level=info msg="RemovePodSandbox \"ad7cc5207d27eaae1c976c0890b0846de9ccfc3b0e9ae4e0ffdfdfdd7fb56f8b\" returns successfully"
Nov 12 22:53:57.876894 containerd[1458]: time="2024-11-12T22:53:57.876867212Z" level=info msg="StopPodSandbox for \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\""
Nov 12 22:53:57.877034 containerd[1458]: time="2024-11-12T22:53:57.877015024Z" level=info msg="TearDown network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" successfully"
Nov 12 22:53:57.877060 containerd[1458]: time="2024-11-12T22:53:57.877034562Z" level=info msg="StopPodSandbox for \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" returns successfully"
Nov 12 22:53:57.877324 containerd[1458]: time="2024-11-12T22:53:57.877306942Z" level=info msg="RemovePodSandbox for \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\""
Nov 12 22:53:57.877374 containerd[1458]: time="2024-11-12T22:53:57.877329364Z" level=info msg="Forcibly stopping sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\""
Nov 12 22:53:57.877428 containerd[1458]: time="2024-11-12T22:53:57.877396614Z" level=info msg="TearDown network for sandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" successfully"
Nov 12 22:53:57.980141 containerd[1458]: time="2024-11-12T22:53:57.980062162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:58.000159 containerd[1458]: time="2024-11-12T22:53:58.000108713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.000230 containerd[1458]: time="2024-11-12T22:53:58.000182284Z" level=info msg="RemovePodSandbox \"c77d4791de7798020e5c958711f48b96984fe34c6fe0e1b337c2cb09d9700fa1\" returns successfully"
Nov 12 22:53:58.000562 containerd[1458]: time="2024-11-12T22:53:58.000546440Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\""
Nov 12 22:53:58.000666 containerd[1458]: time="2024-11-12T22:53:58.000635381Z" level=info msg="TearDown network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" successfully"
Nov 12 22:53:58.000666 containerd[1458]: time="2024-11-12T22:53:58.000648596Z" level=info msg="StopPodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" returns successfully"
Nov 12 22:53:58.001240 containerd[1458]: time="2024-11-12T22:53:58.000852466Z" level=info msg="RemovePodSandbox for \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\""
Nov 12 22:53:58.001240 containerd[1458]: time="2024-11-12T22:53:58.000874387Z" level=info msg="Forcibly stopping sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\""
Nov 12 22:53:58.001240 containerd[1458]: time="2024-11-12T22:53:58.000961635Z" level=info msg="TearDown network for sandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" successfully"
Nov 12 22:53:58.012315 containerd[1458]: time="2024-11-12T22:53:58.012236113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461"
Nov 12 22:53:58.055945 containerd[1458]: time="2024-11-12T22:53:58.055893751Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:58.068142 containerd[1458]: time="2024-11-12T22:53:58.068061466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.068304 containerd[1458]: time="2024-11-12T22:53:58.068159032Z" level=info msg="RemovePodSandbox \"42ecbf1fc19b4e6ff5883ae91b402a8bdd70c3241e984458e281660c39ce8536\" returns successfully"
Nov 12 22:53:58.068681 containerd[1458]: time="2024-11-12T22:53:58.068628119Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\""
Nov 12 22:53:58.068819 containerd[1458]: time="2024-11-12T22:53:58.068760593Z" level=info msg="TearDown network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" successfully"
Nov 12 22:53:58.068819 containerd[1458]: time="2024-11-12T22:53:58.068776402Z" level=info msg="StopPodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" returns successfully"
Nov 12 22:53:58.069115 containerd[1458]: time="2024-11-12T22:53:58.069079732Z" level=info msg="RemovePodSandbox for \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\""
Nov 12 22:53:58.069115 containerd[1458]: time="2024-11-12T22:53:58.069113968Z" level=info msg="Forcibly stopping sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\""
Nov 12 22:53:58.069271 containerd[1458]: time="2024-11-12T22:53:58.069199792Z" level=info msg="TearDown network for sandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" successfully"
Nov 12 22:53:58.080525 containerd[1458]: time="2024-11-12T22:53:58.080484019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:53:58.081248 containerd[1458]: time="2024-11-12T22:53:58.081214565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 3.571036703s"
Nov 12 22:53:58.081289 containerd[1458]: time="2024-11-12T22:53:58.081248900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\""
Nov 12 22:53:58.082143 containerd[1458]: time="2024-11-12T22:53:58.081720492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\""
Nov 12 22:53:58.088458 containerd[1458]: time="2024-11-12T22:53:58.088425035Z" level=info msg="CreateContainer within sandbox \"d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Nov 12 22:53:58.135526 containerd[1458]: time="2024-11-12T22:53:58.135395636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.135526 containerd[1458]: time="2024-11-12T22:53:58.135478524Z" level=info msg="RemovePodSandbox \"b2b69b87712a8ca2c33b8340d3c78fbf70ebef08b58b5f1145d98e66c21b9cb8\" returns successfully"
Nov 12 22:53:58.136077 containerd[1458]: time="2024-11-12T22:53:58.135935729Z" level=info msg="StopPodSandbox for \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\""
Nov 12 22:53:58.136077 containerd[1458]: time="2024-11-12T22:53:58.136050348Z" level=info msg="TearDown network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" successfully"
Nov 12 22:53:58.136077 containerd[1458]: time="2024-11-12T22:53:58.136063562Z" level=info msg="StopPodSandbox for \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" returns successfully"
Nov 12 22:53:58.136522 containerd[1458]: time="2024-11-12T22:53:58.136377412Z" level=info msg="RemovePodSandbox for \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\""
Nov 12 22:53:58.136522 containerd[1458]: time="2024-11-12T22:53:58.136440443Z" level=info msg="Forcibly stopping sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\""
Nov 12 22:53:58.136604 containerd[1458]: time="2024-11-12T22:53:58.136529573Z" level=info msg="TearDown network for sandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" successfully"
Nov 12 22:53:58.168044 containerd[1458]: time="2024-11-12T22:53:58.167987205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.168044 containerd[1458]: time="2024-11-12T22:53:58.168046349Z" level=info msg="RemovePodSandbox \"855aa39f70265833bdb0c25e01b966dd284255a262d6a93333c972e7a9a32c35\" returns successfully"
Nov 12 22:53:58.168418 containerd[1458]: time="2024-11-12T22:53:58.168389885Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\""
Nov 12 22:53:58.168560 containerd[1458]: time="2024-11-12T22:53:58.168490527Z" level=info msg="TearDown network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" successfully"
Nov 12 22:53:58.168560 containerd[1458]: time="2024-11-12T22:53:58.168540202Z" level=info msg="StopPodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" returns successfully"
Nov 12 22:53:58.168796 containerd[1458]: time="2024-11-12T22:53:58.168767456Z" level=info msg="RemovePodSandbox for \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\""
Nov 12 22:53:58.168796 containerd[1458]: time="2024-11-12T22:53:58.168790461Z" level=info msg="Forcibly stopping sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\""
Nov 12 22:53:58.168940 containerd[1458]: time="2024-11-12T22:53:58.168855645Z" level=info msg="TearDown network for sandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" successfully"
Nov 12 22:53:58.258989 containerd[1458]: time="2024-11-12T22:53:58.258928893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.259178 containerd[1458]: time="2024-11-12T22:53:58.259016582Z" level=info msg="RemovePodSandbox \"1a2fa761edf328ed1564c4ddb80fa3f8c509ae887b5a47864a205e521383881d\" returns successfully"
Nov 12 22:53:58.259701 containerd[1458]: time="2024-11-12T22:53:58.259681883Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\""
Nov 12 22:53:58.259909 containerd[1458]: time="2024-11-12T22:53:58.259841779Z" level=info msg="TearDown network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" successfully"
Nov 12 22:53:58.259909 containerd[1458]: time="2024-11-12T22:53:58.259855364Z" level=info msg="StopPodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" returns successfully"
Nov 12 22:53:58.260474 containerd[1458]: time="2024-11-12T22:53:58.260433550Z" level=info msg="RemovePodSandbox for \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\""
Nov 12 22:53:58.260537 containerd[1458]: time="2024-11-12T22:53:58.260477223Z" level=info msg="Forcibly stopping sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\""
Nov 12 22:53:58.260624 containerd[1458]: time="2024-11-12T22:53:58.260576563Z" level=info msg="TearDown network for sandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" successfully"
Nov 12 22:53:58.265822 containerd[1458]: time="2024-11-12T22:53:58.265675768Z" level=info msg="CreateContainer within sandbox \"d4f14c4541dc83ae18666b5828891053f52bad685fc16d48fa980c2107140747\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3a21070a2df283c7655bf61e86934e694bb5f74518483a51e7b44ddc84bcd0f2\""
Nov 12 22:53:58.266441 containerd[1458]: time="2024-11-12T22:53:58.266367681Z" level=info msg="StartContainer for \"3a21070a2df283c7655bf61e86934e694bb5f74518483a51e7b44ddc84bcd0f2\""
Nov 12 22:53:58.290112 containerd[1458]: time="2024-11-12T22:53:58.290048330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.290247 containerd[1458]: time="2024-11-12T22:53:58.290207063Z" level=info msg="RemovePodSandbox \"7be33271cbe50a6ac62e9f51793222b87ebb391b1a6575374ae3f85b0af9f9a0\" returns successfully"
Nov 12 22:53:58.290622 containerd[1458]: time="2024-11-12T22:53:58.290594212Z" level=info msg="StopPodSandbox for \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\""
Nov 12 22:53:58.290744 containerd[1458]: time="2024-11-12T22:53:58.290724191Z" level=info msg="TearDown network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" successfully"
Nov 12 22:53:58.290744 containerd[1458]: time="2024-11-12T22:53:58.290738239Z" level=info msg="StopPodSandbox for \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" returns successfully"
Nov 12 22:53:58.291142 containerd[1458]: time="2024-11-12T22:53:58.291073980Z" level=info msg="RemovePodSandbox for \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\""
Nov 12 22:53:58.291178 containerd[1458]: time="2024-11-12T22:53:58.291148492Z" level=info msg="Forcibly stopping sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\""
Nov 12 22:53:58.291299 containerd[1458]: time="2024-11-12T22:53:58.291249045Z" level=info msg="TearDown network for sandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" successfully"
Nov 12 22:53:58.296328 systemd[1]: Started cri-containerd-3a21070a2df283c7655bf61e86934e694bb5f74518483a51e7b44ddc84bcd0f2.scope - libcontainer container 3a21070a2df283c7655bf61e86934e694bb5f74518483a51e7b44ddc84bcd0f2.
Nov 12 22:53:58.310219 containerd[1458]: time="2024-11-12T22:53:58.310118739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.310366 containerd[1458]: time="2024-11-12T22:53:58.310238157Z" level=info msg="RemovePodSandbox \"55cd9c6de148aa136c40993e58d0ae6451c37f09be577d331d4c2eabdb3d1434\" returns successfully"
Nov 12 22:53:58.310868 containerd[1458]: time="2024-11-12T22:53:58.310810781Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\""
Nov 12 22:53:58.311012 containerd[1458]: time="2024-11-12T22:53:58.310985896Z" level=info msg="TearDown network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" successfully"
Nov 12 22:53:58.311012 containerd[1458]: time="2024-11-12T22:53:58.311004331Z" level=info msg="StopPodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" returns successfully"
Nov 12 22:53:58.311296 containerd[1458]: time="2024-11-12T22:53:58.311265951Z" level=info msg="RemovePodSandbox for \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\""
Nov 12 22:53:58.311296 containerd[1458]: time="2024-11-12T22:53:58.311289986Z" level=info msg="Forcibly stopping sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\""
Nov 12 22:53:58.311413 containerd[1458]: time="2024-11-12T22:53:58.311373286Z" level=info msg="TearDown network for sandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" successfully"
Nov 12 22:53:58.342513 containerd[1458]: time="2024-11-12T22:53:58.342463035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.368754 containerd[1458]: time="2024-11-12T22:53:58.368645034Z" level=info msg="RemovePodSandbox \"2b7e5b2c80460f8851cb10fd32926d57c8f0e88f58c9014cba1ce52fa0c67c49\" returns successfully"
Nov 12 22:53:58.368754 containerd[1458]: time="2024-11-12T22:53:58.368730037Z" level=info msg="StartContainer for \"3a21070a2df283c7655bf61e86934e694bb5f74518483a51e7b44ddc84bcd0f2\" returns successfully"
Nov 12 22:53:58.369465 containerd[1458]: time="2024-11-12T22:53:58.369445635Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\""
Nov 12 22:53:58.369589 containerd[1458]: time="2024-11-12T22:53:58.369544483Z" level=info msg="TearDown network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" successfully"
Nov 12 22:53:58.369947 containerd[1458]: time="2024-11-12T22:53:58.369919800Z" level=info msg="StopPodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" returns successfully"
Nov 12 22:53:58.370305 containerd[1458]: time="2024-11-12T22:53:58.370274058Z" level=info msg="RemovePodSandbox for \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\""
Nov 12 22:53:58.370354 containerd[1458]: time="2024-11-12T22:53:58.370305237Z" level=info msg="Forcibly stopping sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\""
Nov 12 22:53:58.370913 containerd[1458]: time="2024-11-12T22:53:58.370382776Z" level=info msg="TearDown network for sandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" successfully"
Nov 12 22:53:58.398229 containerd[1458]: time="2024-11-12T22:53:58.398035926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.398229 containerd[1458]: time="2024-11-12T22:53:58.398157408Z" level=info msg="RemovePodSandbox \"4a75c4bd0b9ede3c407f28cbdf9d0842af19b447dea31c13a826047f166f42d1\" returns successfully"
Nov 12 22:53:58.398918 containerd[1458]: time="2024-11-12T22:53:58.398853559Z" level=info msg="StopPodSandbox for \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\""
Nov 12 22:53:58.399171 containerd[1458]: time="2024-11-12T22:53:58.399029626Z" level=info msg="TearDown network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" successfully"
Nov 12 22:53:58.399171 containerd[1458]: time="2024-11-12T22:53:58.399051517Z" level=info msg="StopPodSandbox for \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" returns successfully"
Nov 12 22:53:58.399815 containerd[1458]: time="2024-11-12T22:53:58.399462072Z" level=info msg="RemovePodSandbox for \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\""
Nov 12 22:53:58.399815 containerd[1458]: time="2024-11-12T22:53:58.399495756Z" level=info msg="Forcibly stopping sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\""
Nov 12 22:53:58.399815 containerd[1458]: time="2024-11-12T22:53:58.399600377Z" level=info msg="TearDown network for sandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" successfully"
Nov 12 22:53:58.434011 containerd[1458]: time="2024-11-12T22:53:58.433942712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 22:53:58.434262 containerd[1458]: time="2024-11-12T22:53:58.434036711Z" level=info msg="RemovePodSandbox \"99c79d86d07117d2f96e33604a556b27bd3f4123e8842275972b741cc2876307\" returns successfully"
Nov 12 22:53:59.401632 kubelet[2683]: I1112 22:53:59.401204 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-849946d688-74ts5" podStartSLOduration=37.66556541 podStartE2EDuration="41.401150014s" podCreationTimestamp="2024-11-12 22:53:18 +0000 UTC" firstStartedPulling="2024-11-12 22:53:50.772788641 +0000 UTC m=+54.036512185" lastFinishedPulling="2024-11-12 22:53:54.508373235 +0000 UTC m=+57.772096789" observedRunningTime="2024-11-12 22:53:55.280557908 +0000 UTC m=+58.544281463" watchObservedRunningTime="2024-11-12 22:53:59.401150014 +0000 UTC m=+62.664873588"
Nov 12 22:53:59.499272 kubelet[2683]: I1112 22:53:59.498778 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-565bddf9d5-2stsd" podStartSLOduration=33.379206084 podStartE2EDuration="40.498740274s" podCreationTimestamp="2024-11-12 22:53:19 +0000 UTC" firstStartedPulling="2024-11-12 22:53:50.96201289 +0000 UTC m=+54.225736454" lastFinishedPulling="2024-11-12 22:53:58.0815471 +0000 UTC m=+61.345270644" observedRunningTime="2024-11-12 22:53:59.40152477 +0000 UTC m=+62.665248324" watchObservedRunningTime="2024-11-12 22:53:59.498740274 +0000 UTC m=+62.762463828"
Nov 12 22:54:00.872050 containerd[1458]: time="2024-11-12T22:54:00.872002391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:00.906292 containerd[1458]: time="2024-11-12T22:54:00.906241639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635"
Nov 12 22:54:00.934347 containerd[1458]: time="2024-11-12T22:54:00.934312972Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:00.963171 containerd[1458]: time="2024-11-12T22:54:00.963084780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:00.963656 containerd[1458]: time="2024-11-12T22:54:00.963625513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.881874844s"
Nov 12 22:54:00.963725 containerd[1458]: time="2024-11-12T22:54:00.963658445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\""
Nov 12 22:54:00.965236 containerd[1458]: time="2024-11-12T22:54:00.965201843Z" level=info msg="CreateContainer within sandbox \"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Nov 12 22:54:01.279768 containerd[1458]: time="2024-11-12T22:54:01.279730216Z" level=info msg="CreateContainer within sandbox \"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f89fedc6d8783d103fbf4018d04cd223eb35a131d1a86adf48ea2a4a9881b6ee\""
Nov 12 22:54:01.280206 containerd[1458]: time="2024-11-12T22:54:01.280183060Z" level=info msg="StartContainer for \"f89fedc6d8783d103fbf4018d04cd223eb35a131d1a86adf48ea2a4a9881b6ee\""
Nov 12 22:54:01.321343 systemd[1]: Started cri-containerd-f89fedc6d8783d103fbf4018d04cd223eb35a131d1a86adf48ea2a4a9881b6ee.scope - libcontainer container f89fedc6d8783d103fbf4018d04cd223eb35a131d1a86adf48ea2a4a9881b6ee.
Nov 12 22:54:01.323623 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:55998.service - OpenSSH per-connection server daemon (10.0.0.1:55998).
Nov 12 22:54:01.402272 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 55998 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:01.421282 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:01.425521 systemd-logind[1437]: New session 18 of user core.
Nov 12 22:54:01.434338 containerd[1458]: time="2024-11-12T22:54:01.434304014Z" level=info msg="StartContainer for \"f89fedc6d8783d103fbf4018d04cd223eb35a131d1a86adf48ea2a4a9881b6ee\" returns successfully"
Nov 12 22:54:01.435262 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 22:54:01.435880 containerd[1458]: time="2024-11-12T22:54:01.435679899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\""
Nov 12 22:54:01.570949 sshd[5437]: Connection closed by 10.0.0.1 port 55998
Nov 12 22:54:01.571267 sshd-session[5418]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:01.576696 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:55998.service: Deactivated successfully.
Nov 12 22:54:01.578497 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 22:54:01.579091 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit.
Nov 12 22:54:01.580021 systemd-logind[1437]: Removed session 18.
Nov 12 22:54:04.379699 containerd[1458]: time="2024-11-12T22:54:04.379647471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:04.389343 containerd[1458]: time="2024-11-12T22:54:04.389300934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080"
Nov 12 22:54:04.403150 containerd[1458]: time="2024-11-12T22:54:04.403079751Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:04.425278 containerd[1458]: time="2024-11-12T22:54:04.425244644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:54:04.426093 containerd[1458]: time="2024-11-12T22:54:04.426062853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.990351203s"
Nov 12 22:54:04.426169 containerd[1458]: time="2024-11-12T22:54:04.426092500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\""
Nov 12 22:54:04.427663 containerd[1458]: time="2024-11-12T22:54:04.427640602Z" level=info msg="CreateContainer within sandbox \"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 22:54:04.707460 containerd[1458]: time="2024-11-12T22:54:04.707415051Z" level=info msg="CreateContainer within sandbox \"150b17eb6064e9f8c4b9df074056ed3ce609b62f3002d5e351b359d590952089\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9d318a8fc3611e2661dd3cca87e30e385b75e8b7b793ed33d566250e325ce3ee\""
Nov 12 22:54:04.707941 containerd[1458]: time="2024-11-12T22:54:04.707917278Z" level=info msg="StartContainer for \"9d318a8fc3611e2661dd3cca87e30e385b75e8b7b793ed33d566250e325ce3ee\""
Nov 12 22:54:04.745282 systemd[1]: Started cri-containerd-9d318a8fc3611e2661dd3cca87e30e385b75e8b7b793ed33d566250e325ce3ee.scope - libcontainer container 9d318a8fc3611e2661dd3cca87e30e385b75e8b7b793ed33d566250e325ce3ee.
Nov 12 22:54:04.883075 kubelet[2683]: I1112 22:54:04.883042 2683 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 22:54:04.893669 kubelet[2683]: I1112 22:54:04.893611 2683 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 22:54:04.899254 containerd[1458]: time="2024-11-12T22:54:04.899199364Z" level=info msg="StartContainer for \"9d318a8fc3611e2661dd3cca87e30e385b75e8b7b793ed33d566250e325ce3ee\" returns successfully"
Nov 12 22:54:06.586732 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002).
Nov 12 22:54:06.639534 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:06.641626 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:06.645606 systemd-logind[1437]: New session 19 of user core.
Nov 12 22:54:06.656273 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 22:54:06.794311 sshd[5502]: Connection closed by 10.0.0.1 port 56002
Nov 12 22:54:06.794661 sshd-session[5500]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:06.798259 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:56002.service: Deactivated successfully.
Nov 12 22:54:06.800122 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 22:54:06.800804 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit.
Nov 12 22:54:06.801766 systemd-logind[1437]: Removed session 19.
Nov 12 22:54:11.809770 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:54806.service - OpenSSH per-connection server daemon (10.0.0.1:54806).
Nov 12 22:54:11.850849 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 54806 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:11.852282 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:11.855970 systemd-logind[1437]: New session 20 of user core.
Nov 12 22:54:11.865277 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 22:54:11.986790 sshd[5540]: Connection closed by 10.0.0.1 port 54806
Nov 12 22:54:11.987014 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:11.993846 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:54806.service: Deactivated successfully.
Nov 12 22:54:11.995491 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 22:54:11.997064 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit.
Nov 12 22:54:12.003621 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:54820.service - OpenSSH per-connection server daemon (10.0.0.1:54820).
Nov 12 22:54:12.004441 systemd-logind[1437]: Removed session 20.
Nov 12 22:54:12.041338 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 54820 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:12.042730 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:12.046280 systemd-logind[1437]: New session 21 of user core.
Nov 12 22:54:12.055262 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 22:54:12.538517 sshd[5554]: Connection closed by 10.0.0.1 port 54820
Nov 12 22:54:12.538950 sshd-session[5552]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:12.550213 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:54820.service: Deactivated successfully.
Nov 12 22:54:12.551860 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 22:54:12.553452 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit.
Nov 12 22:54:12.566415 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:54832.service - OpenSSH per-connection server daemon (10.0.0.1:54832).
Nov 12 22:54:12.567378 systemd-logind[1437]: Removed session 21.
Nov 12 22:54:12.598954 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 54832 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:12.600436 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:12.605682 systemd-logind[1437]: New session 22 of user core.
Nov 12 22:54:12.617264 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 22:54:14.904795 sshd[5575]: Connection closed by 10.0.0.1 port 54832
Nov 12 22:54:14.905526 sshd-session[5573]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:14.914785 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:54832.service: Deactivated successfully.
Nov 12 22:54:14.917504 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 22:54:14.919563 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit.
Nov 12 22:54:14.931500 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840).
Nov 12 22:54:14.932639 systemd-logind[1437]: Removed session 22.
Nov 12 22:54:14.969506 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:14.971274 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:14.975965 systemd-logind[1437]: New session 23 of user core.
Nov 12 22:54:14.984246 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 22:54:15.356308 sshd[5616]: Connection closed by 10.0.0.1 port 54840
Nov 12 22:54:15.356735 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:15.364284 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:54840.service: Deactivated successfully.
Nov 12 22:54:15.366262 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 22:54:15.367962 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit.
Nov 12 22:54:15.376454 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854).
Nov 12 22:54:15.377512 systemd-logind[1437]: Removed session 23.
Nov 12 22:54:15.409652 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:15.411246 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:15.415515 systemd-logind[1437]: New session 24 of user core.
Nov 12 22:54:15.423249 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 22:54:15.576241 sshd[5629]: Connection closed by 10.0.0.1 port 54854
Nov 12 22:54:15.576644 sshd-session[5627]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:15.580864 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:54854.service: Deactivated successfully.
Nov 12 22:54:15.582949 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 22:54:15.583587 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit.
Nov 12 22:54:15.584827 systemd-logind[1437]: Removed session 24.
Nov 12 22:54:18.817833 kubelet[2683]: E1112 22:54:18.817770 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:54:20.587776 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:34472.service - OpenSSH per-connection server daemon (10.0.0.1:34472).
Nov 12 22:54:20.625488 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 34472 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:20.627002 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:20.630666 systemd-logind[1437]: New session 25 of user core.
Nov 12 22:54:20.637271 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 22:54:20.745447 sshd[5644]: Connection closed by 10.0.0.1 port 34472
Nov 12 22:54:20.745853 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:20.750230 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:34472.service: Deactivated successfully.
Nov 12 22:54:20.752549 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 22:54:20.753325 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit.
Nov 12 22:54:20.754502 systemd-logind[1437]: Removed session 25.
Nov 12 22:54:21.931796 kubelet[2683]: E1112 22:54:21.931764 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:54:21.950947 kubelet[2683]: I1112 22:54:21.950757 2683 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghdrg" podStartSLOduration=49.736305587 podStartE2EDuration="1m2.950720377s" podCreationTimestamp="2024-11-12 22:53:19 +0000 UTC" firstStartedPulling="2024-11-12 22:53:51.211984575 +0000 UTC m=+54.475708130" lastFinishedPulling="2024-11-12 22:54:04.426399366 +0000 UTC m=+67.690122920" observedRunningTime="2024-11-12 22:54:05.202542286 +0000 UTC m=+68.466265851" watchObservedRunningTime="2024-11-12 22:54:21.950720377 +0000 UTC m=+85.214443931"
Nov 12 22:54:25.619791 kubelet[2683]: I1112 22:54:25.619733 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 22:54:25.757331 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478).
Nov 12 22:54:25.793668 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:25.795392 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:25.799252 systemd-logind[1437]: New session 26 of user core.
Nov 12 22:54:25.807248 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 22:54:26.028072 sshd[5686]: Connection closed by 10.0.0.1 port 34478
Nov 12 22:54:26.029842 sshd-session[5684]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:26.033650 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:34478.service: Deactivated successfully.
Nov 12 22:54:26.035734 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 22:54:26.038403 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit.
Nov 12 22:54:26.040538 systemd-logind[1437]: Removed session 26.
Nov 12 22:54:26.818341 kubelet[2683]: E1112 22:54:26.818283 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:54:29.817818 kubelet[2683]: E1112 22:54:29.817782 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:54:29.818283 kubelet[2683]: E1112 22:54:29.817886 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:54:31.040190 systemd[1]: Started sshd@26-10.0.0.135:22-10.0.0.1:36740.service - OpenSSH per-connection server daemon (10.0.0.1:36740).
Nov 12 22:54:31.083119 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:31.084634 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:31.088680 systemd-logind[1437]: New session 27 of user core.
Nov 12 22:54:31.100319 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 22:54:31.219730 sshd[5700]: Connection closed by 10.0.0.1 port 36740
Nov 12 22:54:31.220588 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:31.225012 systemd[1]: sshd@26-10.0.0.135:22-10.0.0.1:36740.service: Deactivated successfully.
Nov 12 22:54:31.227188 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 22:54:31.227803 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit.
Nov 12 22:54:31.228664 systemd-logind[1437]: Removed session 27.
Nov 12 22:54:35.735523 kubelet[2683]: I1112 22:54:35.732800 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 22:54:36.237173 systemd[1]: Started sshd@27-10.0.0.135:22-10.0.0.1:36748.service - OpenSSH per-connection server daemon (10.0.0.1:36748).
Nov 12 22:54:36.282846 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 36748 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:36.284438 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:36.288389 systemd-logind[1437]: New session 28 of user core.
Nov 12 22:54:36.293246 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 22:54:36.402401 sshd[5726]: Connection closed by 10.0.0.1 port 36748
Nov 12 22:54:36.402933 sshd-session[5723]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:36.406740 systemd[1]: sshd@27-10.0.0.135:22-10.0.0.1:36748.service: Deactivated successfully.
Nov 12 22:54:36.408737 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 22:54:36.409536 systemd-logind[1437]: Session 28 logged out. Waiting for processes to exit.
Nov 12 22:54:36.410547 systemd-logind[1437]: Removed session 28.
Nov 12 22:54:41.414121 systemd[1]: Started sshd@28-10.0.0.135:22-10.0.0.1:34558.service - OpenSSH per-connection server daemon (10.0.0.1:34558).
Nov 12 22:54:41.458205 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 34558 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:54:41.459854 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:54:41.464905 systemd-logind[1437]: New session 29 of user core.
Nov 12 22:54:41.472290 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 22:54:41.598985 sshd[5764]: Connection closed by 10.0.0.1 port 34558
Nov 12 22:54:41.599360 sshd-session[5760]: pam_unix(sshd:session): session closed for user core
Nov 12 22:54:41.603028 systemd[1]: sshd@28-10.0.0.135:22-10.0.0.1:34558.service: Deactivated successfully.
Nov 12 22:54:41.605056 systemd[1]: session-29.scope: Deactivated successfully.
Nov 12 22:54:41.605731 systemd-logind[1437]: Session 29 logged out. Waiting for processes to exit.
Nov 12 22:54:41.606615 systemd-logind[1437]: Removed session 29.