Sep 4 17:40:16.894946 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:40:16.894967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:40:16.894979 kernel: BIOS-provided physical RAM map: Sep 4 17:40:16.894985 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:40:16.894991 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 17:40:16.894997 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 17:40:16.895005 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 17:40:16.895011 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 17:40:16.895017 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 4 17:40:16.895023 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 4 17:40:16.895032 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 4 17:40:16.895038 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 4 17:40:16.895044 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 4 17:40:16.895051 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 4 17:40:16.895059 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 4 17:40:16.895070 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 17:40:16.895079 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 4 17:40:16.895086 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 4 17:40:16.895093 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 17:40:16.895099 kernel: NX (Execute Disable) protection: active Sep 4 17:40:16.895109 kernel: APIC: Static calls initialized Sep 4 17:40:16.895116 kernel: efi: EFI v2.7 by EDK II Sep 4 17:40:16.895123 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4ea018 Sep 4 17:40:16.895129 kernel: SMBIOS 2.8 present. Sep 4 17:40:16.895136 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Sep 4 17:40:16.895143 kernel: Hypervisor detected: KVM Sep 4 17:40:16.895149 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:40:16.895159 kernel: kvm-clock: using sched offset of 6235657187 cycles Sep 4 17:40:16.895166 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:40:16.895173 kernel: tsc: Detected 2794.744 MHz processor Sep 4 17:40:16.895180 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:40:16.895188 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:40:16.895195 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 4 17:40:16.895201 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:40:16.895208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:40:16.895215 kernel: Using GB pages for direct mapping Sep 4 17:40:16.895225 kernel: Secure boot disabled Sep 4 17:40:16.895232 kernel: ACPI: Early table checksum verification disabled Sep 4 17:40:16.895238 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 4 17:40:16.895246 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:40:16.895281 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:40:16.895288 kernel: ACPI: DSDT 
0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:40:16.895296 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 4 17:40:16.895306 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:40:16.895314 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:40:16.895321 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:40:16.895328 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 4 17:40:16.895336 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Sep 4 17:40:16.895343 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Sep 4 17:40:16.895350 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 4 17:40:16.895359 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Sep 4 17:40:16.895367 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Sep 4 17:40:16.895374 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Sep 4 17:40:16.895381 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Sep 4 17:40:16.895388 kernel: No NUMA configuration found Sep 4 17:40:16.895395 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 4 17:40:16.895402 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 4 17:40:16.895409 kernel: Zone ranges: Sep 4 17:40:16.895417 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:40:16.895427 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 4 17:40:16.895434 kernel: Normal empty Sep 4 17:40:16.895441 kernel: Movable zone start for each node Sep 4 17:40:16.895448 kernel: Early memory node ranges Sep 4 17:40:16.895456 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:40:16.895463 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 4 17:40:16.895470 
kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 4 17:40:16.895477 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 4 17:40:16.895487 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 4 17:40:16.895494 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 4 17:40:16.895504 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 4 17:40:16.895511 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:40:16.895519 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:40:16.895526 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 4 17:40:16.895533 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:40:16.895542 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 4 17:40:16.895550 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 4 17:40:16.895560 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 4 17:40:16.895567 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:40:16.895577 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:40:16.895584 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:40:16.895591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 17:40:16.895598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:40:16.895605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:40:16.895613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:40:16.895620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:40:16.895627 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:40:16.895634 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:40:16.895644 kernel: TSC deadline timer available Sep 4 17:40:16.895651 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 17:40:16.895658 kernel: kvm-guest: 
APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:40:16.895665 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 17:40:16.895672 kernel: kvm-guest: setup PV sched yield Sep 4 17:40:16.895679 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Sep 4 17:40:16.895686 kernel: Booting paravirtualized kernel on KVM Sep 4 17:40:16.895694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:40:16.895701 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 17:40:16.895711 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Sep 4 17:40:16.895718 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Sep 4 17:40:16.895725 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 17:40:16.895732 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:40:16.895739 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:40:16.895748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:40:16.895755 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:40:16.895763 kernel: random: crng init done Sep 4 17:40:16.895772 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:40:16.895779 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:40:16.895786 kernel: Fallback order for Node 0: 0 Sep 4 17:40:16.895794 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 4 17:40:16.895808 kernel: Policy zone: DMA32 Sep 4 17:40:16.895815 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:40:16.895823 kernel: Memory: 2394288K/2567000K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 172452K reserved, 0K cma-reserved) Sep 4 17:40:16.895830 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:40:16.895837 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:40:16.895849 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:40:16.895857 kernel: Dynamic Preempt: voluntary Sep 4 17:40:16.895864 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:40:16.895872 kernel: rcu: RCU event tracing is enabled. Sep 4 17:40:16.895879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:40:16.895896 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:40:16.895904 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:40:16.895912 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:40:16.895919 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:40:16.895927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:40:16.895935 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 17:40:16.895942 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 4 17:40:16.895952 kernel: Console: colour dummy device 80x25 Sep 4 17:40:16.895960 kernel: printk: console [ttyS0] enabled Sep 4 17:40:16.895967 kernel: ACPI: Core revision 20230628 Sep 4 17:40:16.895975 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 17:40:16.895983 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:40:16.895995 kernel: x2apic enabled Sep 4 17:40:16.896003 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:40:16.896011 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 17:40:16.896019 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 17:40:16.896027 kernel: kvm-guest: setup PV IPIs Sep 4 17:40:16.896034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 17:40:16.896042 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 17:40:16.896049 kernel: Calibrating delay loop (skipped) preset value.. 
5589.48 BogoMIPS (lpj=2794744) Sep 4 17:40:16.896057 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 17:40:16.896067 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 17:40:16.896074 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 17:40:16.896082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:40:16.896090 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:40:16.896097 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:40:16.896105 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:40:16.896113 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 17:40:16.896120 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 17:40:16.896128 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:40:16.896138 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:40:16.896145 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 17:40:16.896154 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 17:40:16.896161 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 17:40:16.896169 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:40:16.896177 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:40:16.896184 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:40:16.896192 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:40:16.896202 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 4 17:40:16.896209 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:40:16.896217 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:40:16.896224 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:40:16.896234 kernel: landlock: Up and running. Sep 4 17:40:16.896242 kernel: SELinux: Initializing. Sep 4 17:40:16.896249 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:40:16.896290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:40:16.896298 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 17:40:16.896310 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:40:16.896317 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:40:16.896325 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:40:16.896333 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 17:40:16.896340 kernel: ... version: 0 Sep 4 17:40:16.896348 kernel: ... bit width: 48 Sep 4 17:40:16.896355 kernel: ... generic registers: 6 Sep 4 17:40:16.896363 kernel: ... value mask: 0000ffffffffffff Sep 4 17:40:16.896370 kernel: ... max period: 00007fffffffffff Sep 4 17:40:16.896380 kernel: ... fixed-purpose events: 0 Sep 4 17:40:16.896387 kernel: ... event mask: 000000000000003f Sep 4 17:40:16.896395 kernel: signal: max sigframe size: 1776 Sep 4 17:40:16.896402 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:40:16.896410 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:40:16.896418 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:40:16.896425 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:40:16.896432 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 4 17:40:16.896440 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:40:16.896450 kernel: smpboot: Max logical packages: 1 Sep 4 17:40:16.896457 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS) Sep 4 17:40:16.896465 kernel: devtmpfs: initialized Sep 4 17:40:16.896472 kernel: x86/mm: Memory block size: 128MB Sep 4 17:40:16.896480 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 4 17:40:16.896488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 4 17:40:16.896498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 4 17:40:16.896506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 4 17:40:16.896514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 4 17:40:16.896524 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:40:16.896531 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:40:16.896539 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:40:16.896546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:40:16.896554 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:40:16.896562 kernel: audit: type=2000 audit(1725471616.131:1): state=initialized audit_enabled=0 res=1 Sep 4 17:40:16.896569 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:40:16.896577 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:40:16.896584 kernel: cpuidle: using governor menu Sep 4 17:40:16.896594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:40:16.896601 kernel: dca service started, version 1.12.1 Sep 4 17:40:16.896609 kernel: PCI: Using configuration type 1 for base access Sep 4 17:40:16.896616 kernel: PCI: Using configuration type 1 for 
extended access Sep 4 17:40:16.896624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:40:16.896634 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:40:16.896641 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:40:16.896649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:40:16.896657 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:40:16.896667 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:40:16.896674 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:40:16.896682 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:40:16.896689 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:40:16.896697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:40:16.896704 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:40:16.896712 kernel: ACPI: Interpreter enabled Sep 4 17:40:16.896719 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:40:16.896727 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:40:16.896736 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:40:16.896744 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:40:16.896752 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 17:40:16.896759 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:40:16.896970 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:40:16.896983 kernel: acpiphp: Slot [3] registered Sep 4 17:40:16.896991 kernel: acpiphp: Slot [4] registered Sep 4 17:40:16.896998 kernel: acpiphp: Slot [5] registered Sep 4 17:40:16.897009 kernel: acpiphp: Slot [6] registered Sep 4 17:40:16.897017 kernel: acpiphp: Slot [7] registered Sep 4 17:40:16.897024 kernel: acpiphp: Slot [8] registered Sep 4 17:40:16.897032 kernel: acpiphp: Slot [9] 
registered Sep 4 17:40:16.897039 kernel: acpiphp: Slot [10] registered Sep 4 17:40:16.897047 kernel: acpiphp: Slot [11] registered Sep 4 17:40:16.897054 kernel: acpiphp: Slot [12] registered Sep 4 17:40:16.897062 kernel: acpiphp: Slot [13] registered Sep 4 17:40:16.897070 kernel: acpiphp: Slot [14] registered Sep 4 17:40:16.897077 kernel: acpiphp: Slot [15] registered Sep 4 17:40:16.897087 kernel: acpiphp: Slot [16] registered Sep 4 17:40:16.897094 kernel: acpiphp: Slot [17] registered Sep 4 17:40:16.897102 kernel: acpiphp: Slot [18] registered Sep 4 17:40:16.897109 kernel: acpiphp: Slot [19] registered Sep 4 17:40:16.897116 kernel: acpiphp: Slot [20] registered Sep 4 17:40:16.897124 kernel: acpiphp: Slot [21] registered Sep 4 17:40:16.897131 kernel: acpiphp: Slot [22] registered Sep 4 17:40:16.897138 kernel: acpiphp: Slot [23] registered Sep 4 17:40:16.897146 kernel: acpiphp: Slot [24] registered Sep 4 17:40:16.897156 kernel: acpiphp: Slot [25] registered Sep 4 17:40:16.897163 kernel: acpiphp: Slot [26] registered Sep 4 17:40:16.897171 kernel: acpiphp: Slot [27] registered Sep 4 17:40:16.897178 kernel: acpiphp: Slot [28] registered Sep 4 17:40:16.897185 kernel: acpiphp: Slot [29] registered Sep 4 17:40:16.897193 kernel: acpiphp: Slot [30] registered Sep 4 17:40:16.897200 kernel: acpiphp: Slot [31] registered Sep 4 17:40:16.897208 kernel: PCI host bridge to bus 0000:00 Sep 4 17:40:16.897375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:40:16.897500 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:40:16.897616 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:40:16.897732 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Sep 4 17:40:16.897861 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Sep 4 17:40:16.897978 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:40:16.898143 kernel: pci 
0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:40:16.898312 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:40:16.898464 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 17:40:16.898593 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Sep 4 17:40:16.898719 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 17:40:16.898852 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 17:40:16.898979 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 17:40:16.899106 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 17:40:16.899291 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:40:16.899425 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:40:16.899552 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:40:16.899695 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Sep 4 17:40:16.899830 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 4 17:40:16.899956 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Sep 4 17:40:16.900081 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 4 17:40:16.900214 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Sep 4 17:40:16.900358 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:40:16.900534 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:40:16.900664 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Sep 4 17:40:16.900791 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 4 17:40:16.900925 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 4 17:40:16.901073 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 4 17:40:16.901207 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 17:40:16.901366 
kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 4 17:40:16.901495 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 4 17:40:16.901641 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:40:16.901769 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:40:16.901925 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Sep 4 17:40:16.902061 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 4 17:40:16.902188 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 4 17:40:16.902199 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:40:16.902207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:40:16.902215 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:40:16.902222 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:40:16.902230 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:40:16.902237 kernel: iommu: Default domain type: Translated Sep 4 17:40:16.902245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:40:16.902314 kernel: efivars: Registered efivars operations Sep 4 17:40:16.902322 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:40:16.902330 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:40:16.902338 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 4 17:40:16.902345 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 4 17:40:16.902353 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 4 17:40:16.902360 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 4 17:40:16.902534 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 17:40:16.902752 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 17:40:16.902896 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:40:16.902907 
kernel: vgaarb: loaded Sep 4 17:40:16.902915 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 17:40:16.902922 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 17:40:16.902930 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:40:16.902937 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:40:16.902945 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:40:16.902953 kernel: pnp: PnP ACPI init Sep 4 17:40:16.903112 kernel: pnp 00:02: [dma 2] Sep 4 17:40:16.903127 kernel: pnp: PnP ACPI: found 6 devices Sep 4 17:40:16.903135 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:40:16.903143 kernel: NET: Registered PF_INET protocol family Sep 4 17:40:16.903151 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:40:16.903158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:40:16.903166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:40:16.903174 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:40:16.903181 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:40:16.903192 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:40:16.903200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:40:16.903208 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:40:16.903215 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:40:16.903223 kernel: NET: Registered PF_XDP protocol family Sep 4 17:40:16.903373 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 4 17:40:16.903501 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 4 17:40:16.903618 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] 
Sep 4 17:40:16.903740 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:40:16.903864 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:40:16.903980 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Sep 4 17:40:16.904095 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Sep 4 17:40:16.904222 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 17:40:16.904369 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:40:16.904380 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:40:16.904387 kernel: Initialise system trusted keyrings Sep 4 17:40:16.904400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:40:16.904407 kernel: Key type asymmetric registered Sep 4 17:40:16.904415 kernel: Asymmetric key parser 'x509' registered Sep 4 17:40:16.904422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:40:16.904430 kernel: io scheduler mq-deadline registered Sep 4 17:40:16.904437 kernel: io scheduler kyber registered Sep 4 17:40:16.904445 kernel: io scheduler bfq registered Sep 4 17:40:16.904452 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:40:16.904461 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:40:16.904472 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:40:16.904479 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:40:16.904487 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:40:16.904495 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:40:16.904519 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:40:16.904530 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:40:16.904540 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:40:16.904681 kernel: rtc_cmos 00:05: RTC can wake from S4 Sep 4 17:40:16.904702 kernel: input: AT Translated Set 2 keyboard 
as /devices/platform/i8042/serio0/input/input0 Sep 4 17:40:16.904867 kernel: rtc_cmos 00:05: registered as rtc0 Sep 4 17:40:16.904988 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:40:16 UTC (1725471616) Sep 4 17:40:16.905107 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 17:40:16.905118 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 17:40:16.905126 kernel: efifb: probing for efifb Sep 4 17:40:16.905134 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 4 17:40:16.905142 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 4 17:40:16.905150 kernel: efifb: scrolling: redraw Sep 4 17:40:16.905162 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 4 17:40:16.905170 kernel: Console: switching to colour frame buffer device 100x37 Sep 4 17:40:16.905178 kernel: fb0: EFI VGA frame buffer device Sep 4 17:40:16.905186 kernel: pstore: Using crash dump compression: deflate Sep 4 17:40:16.905194 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:40:16.905202 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:40:16.905210 kernel: Segment Routing with IPv6 Sep 4 17:40:16.905217 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:40:16.905225 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:40:16.905236 kernel: Key type dns_resolver registered Sep 4 17:40:16.905246 kernel: IPI shorthand broadcast: enabled Sep 4 17:40:16.905276 kernel: sched_clock: Marking stable (938003067, 110999513)->(1200166305, -151163725) Sep 4 17:40:16.905285 kernel: registered taskstats version 1 Sep 4 17:40:16.905293 kernel: Loading compiled-in X.509 certificates Sep 4 17:40:16.905301 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:40:16.905312 kernel: Key type .fscrypt registered Sep 4 17:40:16.905320 kernel: Key type fscrypt-provisioning registered Sep 4 
17:40:16.905328 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:40:16.905336 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:40:16.905344 kernel: ima: No architecture policies found Sep 4 17:40:16.905352 kernel: clk: Disabling unused clocks Sep 4 17:40:16.905360 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:40:16.905368 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:40:16.905378 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:40:16.905386 kernel: Run /init as init process Sep 4 17:40:16.905394 kernel: with arguments: Sep 4 17:40:16.905402 kernel: /init Sep 4 17:40:16.905410 kernel: with environment: Sep 4 17:40:16.905418 kernel: HOME=/ Sep 4 17:40:16.905425 kernel: TERM=linux Sep 4 17:40:16.905433 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:40:16.905443 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:40:16.905457 systemd[1]: Detected virtualization kvm. Sep 4 17:40:16.905465 systemd[1]: Detected architecture x86-64. Sep 4 17:40:16.905473 systemd[1]: Running in initrd. Sep 4 17:40:16.905482 systemd[1]: No hostname configured, using default hostname. Sep 4 17:40:16.905490 systemd[1]: Hostname set to . Sep 4 17:40:16.905498 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:40:16.905507 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:40:16.905518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:40:16.905526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 4 17:40:16.905535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:40:16.905544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:40:16.905553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:40:16.905561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:40:16.905571 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:40:16.905583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:40:16.905591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:40:16.905600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:40:16.905608 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:40:16.905617 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:40:16.905625 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:40:16.905634 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:40:16.905642 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:40:16.905653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:40:16.905662 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:40:16.905673 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:40:16.905682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:40:16.905690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:40:16.905699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:40:16.905707 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:40:16.905716 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:40:16.905724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:40:16.905735 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:40:16.905744 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:40:16.905752 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:40:16.905761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:40:16.905769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:16.905782 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:40:16.905791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:40:16.905814 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:40:16.905850 systemd-journald[192]: Collecting audit messages is disabled.
Sep 4 17:40:16.905881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:40:16.905902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:16.905917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:16.905934 systemd-journald[192]: Journal started
Sep 4 17:40:16.905962 systemd-journald[192]: Runtime Journal (/run/log/journal/85cfdf1ac64e4e9ebd3da1101a78f28c) is 6.0M, max 48.3M, 42.2M free.
Sep 4 17:40:16.907396 systemd-modules-load[194]: Inserted module 'overlay'
Sep 4 17:40:16.908748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:40:16.911364 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:40:16.920467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:40:16.924145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:40:16.926949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:16.931918 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:40:16.934131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:40:16.938410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:40:16.944278 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:40:16.947067 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 4 17:40:16.948011 kernel: Bridge firewalling registered
Sep 4 17:40:16.948787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:40:16.951200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:40:16.959178 dracut-cmdline[221]: dracut-dracut-053
Sep 4 17:40:16.962655 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:40:16.976138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:40:16.984496 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:40:17.024572 systemd-resolved[248]: Positive Trust Anchors:
Sep 4 17:40:17.024595 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:40:17.024640 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:40:17.027901 systemd-resolved[248]: Defaulting to hostname 'linux'.
Sep 4 17:40:17.029329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:40:17.034428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:40:17.072287 kernel: SCSI subsystem initialized
Sep 4 17:40:17.081282 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:40:17.092312 kernel: iscsi: registered transport (tcp)
Sep 4 17:40:17.113298 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:40:17.113357 kernel: QLogic iSCSI HBA Driver
Sep 4 17:40:17.169717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:40:17.179451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:40:17.209307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:40:17.209376 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:40:17.210590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:40:17.256304 kernel: raid6: avx2x4 gen() 29493 MB/s
Sep 4 17:40:17.273308 kernel: raid6: avx2x2 gen() 31110 MB/s
Sep 4 17:40:17.290392 kernel: raid6: avx2x1 gen() 25790 MB/s
Sep 4 17:40:17.290419 kernel: raid6: using algorithm avx2x2 gen() 31110 MB/s
Sep 4 17:40:17.308409 kernel: raid6: .... xor() 19828 MB/s, rmw enabled
Sep 4 17:40:17.308489 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:40:17.330300 kernel: xor: automatically using best checksumming function avx
Sep 4 17:40:17.489303 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:40:17.506329 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:40:17.513448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:40:17.539052 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 4 17:40:17.545981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:40:17.553421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:40:17.569246 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Sep 4 17:40:17.607591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:40:17.621445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:40:17.689138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:40:17.698480 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:40:17.714649 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:40:17.717699 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:40:17.720318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:40:17.722724 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:40:17.732279 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:40:17.733584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:40:17.739421 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:40:17.746376 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:40:17.756918 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:40:17.756981 kernel: GPT:9289727 != 19775487
Sep 4 17:40:17.756998 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:40:17.757013 kernel: GPT:9289727 != 19775487
Sep 4 17:40:17.757027 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:40:17.757054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:40:17.757638 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:40:17.764098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:40:17.764312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:17.767472 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:17.768832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:40:17.769044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:17.770524 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:17.787174 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:40:17.787201 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:40:17.787992 kernel: libata version 3.00 loaded.
Sep 4 17:40:17.789718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:17.797280 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:40:17.797535 kernel: scsi host0: ata_piix
Sep 4 17:40:17.797738 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473)
Sep 4 17:40:17.797754 kernel: scsi host1: ata_piix
Sep 4 17:40:17.797961 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:40:17.797983 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:40:17.797998 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Sep 4 17:40:17.816929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:40:17.826558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:40:17.841483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:40:17.842807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:40:17.852014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:40:17.866432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:40:17.869217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:40:17.869310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:17.873046 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:17.876195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:17.892167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:17.908576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:17.931368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:17.949763 disk-uuid[542]: Primary Header is updated.
Sep 4 17:40:17.949763 disk-uuid[542]: Secondary Entries is updated.
Sep 4 17:40:17.949763 disk-uuid[542]: Secondary Header is updated.
Sep 4 17:40:17.954680 kernel: ata2: found unknown device (class 0)
Sep 4 17:40:17.954714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:40:17.954729 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:40:17.958282 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:40:17.961281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:40:18.003599 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:40:18.003959 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:40:18.020333 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:40:18.960992 disk-uuid[556]: The operation has completed successfully.
Sep 4 17:40:18.962657 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:40:18.990787 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:40:18.990947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:40:19.015429 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:40:19.020865 sh[583]: Success
Sep 4 17:40:19.033337 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:40:19.073299 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:40:19.092345 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:40:19.095730 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:40:19.117850 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:40:19.117897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:19.117909 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:40:19.119810 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:40:19.119834 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:40:19.125079 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:40:19.125927 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:40:19.135422 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:40:19.137550 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:40:19.148119 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:19.148152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:19.148163 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:40:19.152299 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:40:19.161705 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:40:19.163541 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:19.253536 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:40:19.268498 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:40:19.272705 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:40:19.275769 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:40:19.293666 systemd-networkd[761]: lo: Link UP
Sep 4 17:40:19.293676 systemd-networkd[761]: lo: Gained carrier
Sep 4 17:40:19.296917 systemd-networkd[761]: Enumeration completed
Sep 4 17:40:19.297178 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:40:19.300120 systemd[1]: Reached target network.target - Network.
Sep 4 17:40:19.301523 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:40:19.301527 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:40:19.302326 systemd-networkd[761]: eth0: Link UP
Sep 4 17:40:19.302330 systemd-networkd[761]: eth0: Gained carrier
Sep 4 17:40:19.302338 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:40:19.319331 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:40:19.344922 ignition[764]: Ignition 2.19.0
Sep 4 17:40:19.344937 ignition[764]: Stage: fetch-offline
Sep 4 17:40:19.344980 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:19.344993 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:19.345104 ignition[764]: parsed url from cmdline: ""
Sep 4 17:40:19.345110 ignition[764]: no config URL provided
Sep 4 17:40:19.345117 ignition[764]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:40:19.345130 ignition[764]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:40:19.345171 ignition[764]: op(1): [started] loading QEMU firmware config module
Sep 4 17:40:19.345179 ignition[764]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:40:19.354997 ignition[764]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:40:19.394334 ignition[764]: parsing config with SHA512: 08696b1936d51bf8d6a0b7647e53125cc72919591dacd900ce8ae697ac5a785210ae7674eb57f3a607d5f19f061d88c32bdc14b85ae928f7e16d14bdc5dab0be
Sep 4 17:40:19.398119 unknown[764]: fetched base config from "system"
Sep 4 17:40:19.398134 unknown[764]: fetched user config from "qemu"
Sep 4 17:40:19.398497 ignition[764]: fetch-offline: fetch-offline passed
Sep 4 17:40:19.398560 ignition[764]: Ignition finished successfully
Sep 4 17:40:19.402005 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:40:19.405020 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:40:19.417506 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:40:19.432354 ignition[778]: Ignition 2.19.0
Sep 4 17:40:19.432365 ignition[778]: Stage: kargs
Sep 4 17:40:19.434281 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:19.434301 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:19.435155 ignition[778]: kargs: kargs passed
Sep 4 17:40:19.435214 ignition[778]: Ignition finished successfully
Sep 4 17:40:19.440940 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:40:19.451674 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:40:19.470813 ignition[786]: Ignition 2.19.0
Sep 4 17:40:19.470825 ignition[786]: Stage: disks
Sep 4 17:40:19.471016 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:19.471029 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:19.471854 ignition[786]: disks: disks passed
Sep 4 17:40:19.471902 ignition[786]: Ignition finished successfully
Sep 4 17:40:19.493673 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:40:19.496011 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:40:19.496110 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:40:19.498303 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:40:19.500630 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:40:19.501006 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:40:19.514416 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:40:19.538186 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:40:19.712910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:40:19.721389 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:40:19.823297 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:40:19.824284 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:40:19.824972 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:40:19.837344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:40:19.839410 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:40:19.840651 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:40:19.840690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:40:19.848343 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Sep 4 17:40:19.848364 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:19.840716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:40:19.854782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:19.854798 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:40:19.854810 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:40:19.848084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:40:19.853058 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:40:19.855986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:40:19.912966 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:40:19.918470 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:40:19.924057 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:40:19.929068 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:40:20.020311 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:40:20.037393 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:40:20.039284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:40:20.065299 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:20.084389 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:40:20.117636 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:40:20.118746 ignition[924]: INFO : Ignition 2.19.0
Sep 4 17:40:20.118746 ignition[924]: INFO : Stage: mount
Sep 4 17:40:20.120515 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:20.120515 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:20.120515 ignition[924]: INFO : mount: mount passed
Sep 4 17:40:20.120515 ignition[924]: INFO : Ignition finished successfully
Sep 4 17:40:20.123014 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:40:20.139440 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:40:20.146334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:40:20.160255 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933)
Sep 4 17:40:20.160315 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:20.160341 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:20.161811 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:40:20.165291 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:40:20.166602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:40:20.277725 ignition[950]: INFO : Ignition 2.19.0
Sep 4 17:40:20.277725 ignition[950]: INFO : Stage: files
Sep 4 17:40:20.279829 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:20.279829 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:20.279829 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:40:20.279829 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:40:20.279829 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:40:20.286865 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:40:20.286865 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:40:20.286865 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:40:20.286865 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:40:20.286865 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:40:20.282184 unknown[950]: wrote ssh authorized keys file for user: core
Sep 4 17:40:20.362963 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:40:20.521684 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:40:20.523655 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:40:20.525447 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep 4 17:40:20.799486 systemd-networkd[761]: eth0: Gained IPv6LL
Sep 4 17:40:20.984674 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:40:21.540067 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:40:21.540067 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:40:21.544516 ignition[950]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:40:21.576985 ignition[950]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:40:21.585358 ignition[950]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:40:21.586970 ignition[950]: INFO : files: files passed
Sep 4 17:40:21.586970 ignition[950]: INFO : Ignition finished successfully
Sep 4 17:40:21.588074 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:40:21.599448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:40:21.601468 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:40:21.603613 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:40:21.603754 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:40:21.611781 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:40:21.614584 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:40:21.616382 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:40:21.619154 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:40:21.617737 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:40:21.619385 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:40:21.630424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:40:21.655059 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:40:21.655198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:40:21.657586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:40:21.659683 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:40:21.661984 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:40:21.672394 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:40:21.687973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:40:21.689526 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:40:21.705485 systemd[1]: Stopped target network.target - Network.
Sep 4 17:40:21.706520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:40:21.708501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:40:21.710856 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:40:21.712902 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:40:21.713023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:40:21.715505 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:40:21.717148 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:40:21.719209 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:40:21.721317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:40:21.723379 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:40:21.725770 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:40:21.727873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:40:21.730189 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:40:21.732207 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:40:21.734415 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:40:21.736250 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:40:21.736394 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:40:21.738761 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:40:21.740186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:40:21.742355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:40:21.742443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:40:21.744641 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:40:21.744770 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:40:21.747231 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:40:21.747377 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:40:21.749281 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:40:21.751070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:40:21.753380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:40:21.754851 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:40:21.756752 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:40:21.758706 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:40:21.758844 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:40:21.761075 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:40:21.761176 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:40:21.763002 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:40:21.763167 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:40:21.765299 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:40:21.765447 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:40:21.782448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:40:21.782567 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:40:21.782729 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:40:21.784005 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:40:21.784516 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:40:21.784994 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:40:21.785549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:40:21.785714 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:40:21.786152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:40:21.786524 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:40:21.789790 systemd-networkd[761]: eth0: DHCPv6 lease lost
Sep 4 17:40:21.795614 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:40:21.795796 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:40:21.801813 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:40:21.802365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:40:21.808439 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:40:21.824345 ignition[1005]: INFO : Ignition 2.19.0
Sep 4 17:40:21.824345 ignition[1005]: INFO : Stage: umount
Sep 4 17:40:21.808689 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:40:21.819254 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:40:21.828217 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:21.828217 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:40:21.828217 ignition[1005]: INFO : umount: umount passed
Sep 4 17:40:21.828217 ignition[1005]: INFO : Ignition finished successfully
Sep 4 17:40:21.819352 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:40:21.835471 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:40:21.835597 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:40:21.835699 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:40:21.838777 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:40:21.838841 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:40:21.839944 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:40:21.839998 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:40:21.840550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:40:21.840602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:40:21.842103 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:40:21.842765 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:40:21.842892 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:40:21.849393 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:40:21.849507 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:40:21.850899 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:40:21.850951 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:40:21.852169 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:40:21.852218 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:40:21.854469 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:40:21.854520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:40:21.857146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:40:21.883588 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:40:21.883805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:40:21.885717 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:40:21.885792 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:40:21.887762 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:40:21.887804 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:40:21.891441 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:40:21.891504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:40:21.892916 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:40:21.893773 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:40:21.895322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:40:21.895423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:21.899147 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:40:21.901475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:40:21.901543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:40:21.904158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:40:21.904215 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:21.907076 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:40:21.907196 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:40:21.911929 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:40:21.912072 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:40:22.089819 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:40:22.090008 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:40:22.091392 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:40:22.092787 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:40:22.092859 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:40:22.104502 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:40:22.114852 systemd[1]: Switching root.
Sep 4 17:40:22.147148 systemd-journald[192]: Journal stopped
Sep 4 17:40:23.377907 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:40:23.378000 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:40:23.378015 kernel: SELinux: policy capability open_perms=1
Sep 4 17:40:23.378027 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:40:23.378039 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:40:23.378050 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:40:23.378069 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:40:23.378081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:40:23.378092 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:40:23.378109 kernel: audit: type=1403 audit(1725471622.587:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:40:23.378129 systemd[1]: Successfully loaded SELinux policy in 39.959ms.
Sep 4 17:40:23.378160 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.284ms.
Sep 4 17:40:23.378174 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:40:23.378186 systemd[1]: Detected virtualization kvm.
Sep 4 17:40:23.378199 systemd[1]: Detected architecture x86-64.
Sep 4 17:40:23.378213 systemd[1]: Detected first boot.
Sep 4 17:40:23.378225 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:40:23.378238 zram_generator::config[1048]: No configuration found.
Sep 4 17:40:23.378270 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:40:23.378284 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:40:23.378296 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:40:23.378309 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:40:23.378321 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:40:23.378334 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:40:23.378346 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:40:23.378358 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:40:23.378371 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:40:23.378387 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:40:23.378400 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:40:23.378412 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:40:23.378425 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:40:23.378437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:40:23.378449 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:40:23.378462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:40:23.378480 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:40:23.378499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:40:23.378513 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:40:23.378525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:40:23.378538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:40:23.378550 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:40:23.378567 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:40:23.378580 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:40:23.378593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:40:23.378608 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:40:23.378620 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:40:23.378642 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:40:23.378655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:40:23.378667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:40:23.378680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:40:23.378692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:40:23.378705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:40:23.378717 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:40:23.378736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:40:23.378748 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:40:23.378760 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:40:23.378773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:40:23.378785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:40:23.378798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:40:23.378811 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:40:23.378823 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:40:23.378836 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:40:23.378851 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:40:23.378864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:40:23.378877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:40:23.378889 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:40:23.378901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:40:23.378913 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:40:23.378925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:40:23.378938 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:40:23.378953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:40:23.378965 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:40:23.378978 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:40:23.378990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:40:23.379002 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:40:23.379015 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:40:23.379027 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:40:23.379039 kernel: loop: module loaded
Sep 4 17:40:23.379051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:40:23.379067 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:40:23.379080 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:40:23.379092 kernel: ACPI: bus type drm_connector registered
Sep 4 17:40:23.379104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:40:23.379116 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:40:23.379128 systemd[1]: Stopped verity-setup.service.
Sep 4 17:40:23.379141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:40:23.379153 kernel: fuse: init (API version 7.39)
Sep 4 17:40:23.379185 systemd-journald[1117]: Collecting audit messages is disabled.
Sep 4 17:40:23.379214 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:40:23.379231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:40:23.379248 systemd-journald[1117]: Journal started
Sep 4 17:40:23.379319 systemd-journald[1117]: Runtime Journal (/run/log/journal/85cfdf1ac64e4e9ebd3da1101a78f28c) is 6.0M, max 48.3M, 42.2M free.
Sep 4 17:40:23.135795 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:40:23.158029 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:40:23.158703 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:40:23.382301 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:40:23.383819 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:40:23.385121 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:40:23.386372 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:40:23.387679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:40:23.389000 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:40:23.390491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:40:23.392081 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:40:23.392304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:40:23.393812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:40:23.393995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:40:23.395614 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:40:23.395798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:40:23.397247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:40:23.397439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:40:23.399001 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:40:23.399179 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:40:23.400591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:40:23.400778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:40:23.402164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:40:23.403766 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:40:23.405328 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:40:23.422694 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:40:23.431375 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:40:23.433912 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:40:23.435073 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:40:23.435107 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:40:23.437187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:40:23.439668 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:40:23.442700 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:40:23.443891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:40:23.447511 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:40:23.450417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:40:23.451861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:40:23.458417 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:40:23.460848 systemd-journald[1117]: Time spent on flushing to /var/log/journal/85cfdf1ac64e4e9ebd3da1101a78f28c is 29.663ms for 987 entries.
Sep 4 17:40:23.460848 systemd-journald[1117]: System Journal (/var/log/journal/85cfdf1ac64e4e9ebd3da1101a78f28c) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:40:23.521828 systemd-journald[1117]: Received client request to flush runtime journal.
Sep 4 17:40:23.521897 kernel: loop0: detected capacity change from 0 to 211296
Sep 4 17:40:23.459701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:40:23.462180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:40:23.525287 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:40:23.468453 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:40:23.473199 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:40:23.476108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:40:23.480559 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:40:23.482352 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:40:23.484313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:40:23.488119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:40:23.499508 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:40:23.508693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:40:23.510666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:40:23.522968 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:40:23.525815 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:40:23.533139 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:40:23.537619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:40:23.538580 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:40:23.547221 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:40:23.557350 kernel: loop1: detected capacity change from 0 to 140728
Sep 4 17:40:23.557559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:40:23.581609 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 4 17:40:23.581639 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 4 17:40:23.588110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:40:23.600285 kernel: loop2: detected capacity change from 0 to 89336
Sep 4 17:40:23.637302 kernel: loop3: detected capacity change from 0 to 211296
Sep 4 17:40:23.646278 kernel: loop4: detected capacity change from 0 to 140728
Sep 4 17:40:23.657281 kernel: loop5: detected capacity change from 0 to 89336
Sep 4 17:40:23.668203 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:40:23.669232 (sd-merge)[1185]: Merged extensions into '/usr'.
Sep 4 17:40:23.675159 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:40:23.675179 systemd[1]: Reloading...
Sep 4 17:40:23.725306 zram_generator::config[1209]: No configuration found.
Sep 4 17:40:23.793870 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:40:23.849476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:40:23.900799 systemd[1]: Reloading finished in 225 ms.
Sep 4 17:40:23.935689 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:40:23.937319 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:40:23.951455 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:40:23.953547 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:40:23.963035 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:40:23.963052 systemd[1]: Reloading...
Sep 4 17:40:23.982043 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:40:23.982635 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:40:23.984001 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:40:23.984916 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep 4 17:40:23.985077 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep 4 17:40:23.989566 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:40:23.992393 systemd-tmpfiles[1247]: Skipping /boot
Sep 4 17:40:24.008489 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:40:24.008657 systemd-tmpfiles[1247]: Skipping /boot
Sep 4 17:40:24.019314 zram_generator::config[1272]: No configuration found.
Sep 4 17:40:24.139638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:40:24.191388 systemd[1]: Reloading finished in 227 ms.
Sep 4 17:40:24.211019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:40:24.223011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:40:24.232680 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:40:24.235351 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:40:24.237839 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:40:24.242097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:40:24.244821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:40:24.248547 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:40:24.252231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:40:24.253015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:40:24.259550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:40:24.266535 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:40:24.270010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:40:24.271230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:40:24.271380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:40:24.272475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:40:24.272715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:40:24.281414 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:40:24.281615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:40:24.283435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:24.283676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:24.285663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:40:24.291557 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 4 17:40:24.292211 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:40:24.298075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:24.298732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:40:24.301335 augenrules[1340]: No rules Sep 4 17:40:24.306602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:40:24.309389 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:40:24.311691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:40:24.314859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:40:24.317233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:40:24.319540 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:40:24.323032 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:40:24.324230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:24.325418 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:40:24.327934 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 4 17:40:24.330470 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:40:24.332404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:40:24.332600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:40:24.334365 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:40:24.334551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:40:24.336525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:24.336718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:24.338527 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:40:24.338719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:40:24.340657 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:40:24.350516 systemd[1]: Finished ensure-sysext.service. Sep 4 17:40:24.372490 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:40:24.373999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:40:24.374731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:40:24.391440 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:40:24.393796 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:40:24.394064 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 4 17:40:24.420336 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1353) Sep 4 17:40:24.425716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:40:24.432281 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1353) Sep 4 17:40:24.448296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Sep 4 17:40:24.515324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 17:40:24.536896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:40:24.540751 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Sep 4 17:40:24.543550 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:40:24.546519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:40:24.560026 systemd-networkd[1375]: lo: Link UP Sep 4 17:40:24.560036 systemd-networkd[1375]: lo: Gained carrier Sep 4 17:40:24.566489 systemd-networkd[1375]: Enumeration completed Sep 4 17:40:24.566673 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:40:24.566943 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:40:24.566951 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:40:24.569514 systemd-networkd[1375]: eth0: Link UP Sep 4 17:40:24.569524 systemd-networkd[1375]: eth0: Gained carrier Sep 4 17:40:24.569537 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 17:40:24.577008 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:40:24.575428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:40:24.577429 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:40:24.581379 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:40:24.603815 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:40:25.263361 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:40:25.263430 systemd-timesyncd[1380]: Initial clock synchronization to Wed 2024-09-04 17:40:25.263203 UTC. Sep 4 17:40:25.263704 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:40:25.296539 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:40:25.296817 systemd-resolved[1315]: Positive Trust Anchors: Sep 4 17:40:25.296830 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:40:25.296863 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:40:25.306118 systemd-resolved[1315]: Defaulting to hostname 'linux'. Sep 4 17:40:25.309288 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:40:25.311473 systemd[1]: Reached target network.target - Network. 
Sep 4 17:40:25.313479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:40:25.362036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:40:25.371475 kernel: kvm_amd: TSC scaling supported Sep 4 17:40:25.371535 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:40:25.371550 kernel: kvm_amd: Nested Paging enabled Sep 4 17:40:25.372451 kernel: kvm_amd: LBR virtualization supported Sep 4 17:40:25.372467 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:40:25.373450 kernel: kvm_amd: Virtual GIF supported Sep 4 17:40:25.393344 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:40:25.426052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:40:25.427675 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:40:25.438663 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:40:25.447090 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:40:25.486508 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:40:25.488022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:40:25.489134 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:40:25.490451 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:40:25.491778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:40:25.493272 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:40:25.494546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:40:25.495817 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 4 17:40:25.497076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:40:25.497108 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:40:25.498017 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:40:25.499845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:40:25.502602 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:40:25.517873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:40:25.520354 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:40:25.521981 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:40:25.523354 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:40:25.524426 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:40:25.525458 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:40:25.525486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:40:25.526644 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:40:25.528977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:40:25.532584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:40:25.536617 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:40:25.537827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:40:25.539453 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:40:25.539497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 4 17:40:25.543483 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:40:25.545172 jq[1420]: false Sep 4 17:40:25.547511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:40:25.549753 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:40:25.554437 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:40:25.556847 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:40:25.557325 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:40:25.558233 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:40:25.563526 extend-filesystems[1421]: Found loop3 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found loop4 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found loop5 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found sr0 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda1 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda2 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda3 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found usr Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda4 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda6 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda7 Sep 4 17:40:25.564485 extend-filesystems[1421]: Found vda9 Sep 4 17:40:25.564485 extend-filesystems[1421]: Checking size of /dev/vda9 Sep 4 17:40:25.565385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:40:25.573510 dbus-daemon[1419]: [system] SELinux support is enabled Sep 4 17:40:25.575034 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 17:40:25.581359 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:40:25.593017 jq[1429]: true Sep 4 17:40:25.592872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:40:25.593099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:40:25.596755 extend-filesystems[1421]: Resized partition /dev/vda9 Sep 4 17:40:25.598129 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:40:25.599052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:40:25.601114 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:40:25.601452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:40:25.603348 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:40:25.605686 update_engine[1428]: I0904 17:40:25.605587 1428 main.cc:92] Flatcar Update Engine starting Sep 4 17:40:25.614575 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:40:25.614607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378) Sep 4 17:40:25.614627 update_engine[1428]: I0904 17:40:25.610438 1428 update_check_scheduler.cc:74] Next update check in 7m49s Sep 4 17:40:25.628780 jq[1445]: true Sep 4 17:40:25.636337 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:40:25.640757 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:40:25.656340 tar[1442]: linux-amd64/helm Sep 4 17:40:25.660976 systemd[1]: Started update-engine.service - Update Engine. 
Sep 4 17:40:25.664985 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:40:25.665017 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:40:25.813533 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:40:25.813533 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:40:25.813533 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:40:25.666444 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:40:25.823412 extend-filesystems[1421]: Resized filesystem in /dev/vda9 Sep 4 17:40:25.666468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:40:25.678817 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:40:25.756922 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:40:25.816703 systemd-logind[1427]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:40:25.816725 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:40:25.819021 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:40:25.819565 systemd-logind[1427]: New seat seat0. Sep 4 17:40:25.820068 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:40:25.823381 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 17:40:26.042949 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:40:26.047674 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:40:26.050970 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:40:26.217855 containerd[1446]: time="2024-09-04T17:40:26.217177469Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:40:26.269549 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:40:26.296947 containerd[1446]: time="2024-09-04T17:40:26.296858583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299034 containerd[1446]: time="2024-09-04T17:40:26.298997588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299034 containerd[1446]: time="2024-09-04T17:40:26.299031582Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:40:26.299091 containerd[1446]: time="2024-09-04T17:40:26.299054304Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:40:26.299351 containerd[1446]: time="2024-09-04T17:40:26.299290628Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:40:26.299351 containerd[1446]: time="2024-09-04T17:40:26.299338638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299463 containerd[1446]: time="2024-09-04T17:40:26.299439868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299487 containerd[1446]: time="2024-09-04T17:40:26.299462671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299781 containerd[1446]: time="2024-09-04T17:40:26.299745001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299817 containerd[1446]: time="2024-09-04T17:40:26.299779435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299817 containerd[1446]: time="2024-09-04T17:40:26.299800405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299817 containerd[1446]: time="2024-09-04T17:40:26.299814311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.299999 containerd[1446]: time="2024-09-04T17:40:26.299966927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.300341 containerd[1446]: time="2024-09-04T17:40:26.300296506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:26.300509 containerd[1446]: time="2024-09-04T17:40:26.300476784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:26.300509 containerd[1446]: time="2024-09-04T17:40:26.300502542Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:40:26.300669 containerd[1446]: time="2024-09-04T17:40:26.300641423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:40:26.301119 containerd[1446]: time="2024-09-04T17:40:26.300731462Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:40:26.303048 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:40:26.314600 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:40:26.323694 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:40:26.323957 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:40:26.330601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:40:26.350934 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:40:26.359971 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:40:26.376113 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:40:26.378672 tar[1442]: linux-amd64/LICENSE Sep 4 17:40:26.378672 tar[1442]: linux-amd64/README.md Sep 4 17:40:26.378203 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:40:26.393961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:40:26.435876 containerd[1446]: time="2024-09-04T17:40:26.435818281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:40:26.435937 containerd[1446]: time="2024-09-04T17:40:26.435899964Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 4 17:40:26.435937 containerd[1446]: time="2024-09-04T17:40:26.435930301Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:40:26.435991 containerd[1446]: time="2024-09-04T17:40:26.435950249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:40:26.435991 containerd[1446]: time="2024-09-04T17:40:26.435976157Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:40:26.436224 containerd[1446]: time="2024-09-04T17:40:26.436186151Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:40:26.436656 containerd[1446]: time="2024-09-04T17:40:26.436607833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:40:26.436816 containerd[1446]: time="2024-09-04T17:40:26.436785096Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:40:26.436816 containerd[1446]: time="2024-09-04T17:40:26.436811505Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:40:26.436882 containerd[1446]: time="2024-09-04T17:40:26.436830050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:40:26.436882 containerd[1446]: time="2024-09-04T17:40:26.436850288Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.436882 containerd[1446]: time="2024-09-04T17:40:26.436868011Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 17:40:26.436959 containerd[1446]: time="2024-09-04T17:40:26.436886376Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.436959 containerd[1446]: time="2024-09-04T17:40:26.436906584Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.436959 containerd[1446]: time="2024-09-04T17:40:26.436926240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.436959 containerd[1446]: time="2024-09-04T17:40:26.436944254Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.437080 containerd[1446]: time="2024-09-04T17:40:26.436965945Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.437080 containerd[1446]: time="2024-09-04T17:40:26.436992084Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:40:26.437080 containerd[1446]: time="2024-09-04T17:40:26.437026899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437080 containerd[1446]: time="2024-09-04T17:40:26.437067906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437092202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437109635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437129292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437146534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437164237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437181239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437197 containerd[1446]: time="2024-09-04T17:40:26.437198231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437221805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437237655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437253655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437269234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437289753Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437340969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437367819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437423 containerd[1446]: time="2024-09-04T17:40:26.437383879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437463729Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437488826Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437506089Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437523161Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437537357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437554149Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437575429Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:40:26.437641 containerd[1446]: time="2024-09-04T17:40:26.437590497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:40:26.438025 containerd[1446]: time="2024-09-04T17:40:26.437958428Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:40:26.438199 containerd[1446]: time="2024-09-04T17:40:26.438038227Z" level=info msg="Connect containerd service" Sep 4 17:40:26.438199 containerd[1446]: time="2024-09-04T17:40:26.438081438Z" level=info msg="using legacy CRI server" Sep 4 17:40:26.438199 containerd[1446]: time="2024-09-04T17:40:26.438092309Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:40:26.438338 containerd[1446]: time="2024-09-04T17:40:26.438297354Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:40:26.439069 containerd[1446]: time="2024-09-04T17:40:26.439032784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:40:26.439263 containerd[1446]: time="2024-09-04T17:40:26.439208213Z" level=info msg="Start subscribing containerd event" Sep 4 17:40:26.439329 containerd[1446]: time="2024-09-04T17:40:26.439293533Z" level=info msg="Start recovering state" Sep 4 17:40:26.439525 containerd[1446]: time="2024-09-04T17:40:26.439493909Z" level=info msg="Start event monitor" Sep 4 17:40:26.439525 containerd[1446]: time="2024-09-04T17:40:26.439496685Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 4 17:40:26.439525 containerd[1446]: time="2024-09-04T17:40:26.439540857Z" level=info msg="Start snapshots syncer" Sep 4 17:40:26.439642 containerd[1446]: time="2024-09-04T17:40:26.439559933Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:40:26.439642 containerd[1446]: time="2024-09-04T17:40:26.439568960Z" level=info msg="Start streaming server" Sep 4 17:40:26.439642 containerd[1446]: time="2024-09-04T17:40:26.439599958Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:40:26.440422 containerd[1446]: time="2024-09-04T17:40:26.439685659Z" level=info msg="containerd successfully booted in 0.228829s" Sep 4 17:40:26.439766 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:40:26.450427 systemd-networkd[1375]: eth0: Gained IPv6LL Sep 4 17:40:26.454007 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:40:26.455919 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:40:26.475724 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:40:26.478749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:26.503977 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:40:26.527154 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:40:26.527490 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:40:26.534949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:40:26.537473 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:40:27.536185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:27.537953 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 4 17:40:27.539156 systemd[1]: Startup finished in 1.071s (kernel) + 5.880s (initrd) + 4.332s (userspace) = 11.284s. Sep 4 17:40:27.541598 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:40:28.410664 kubelet[1531]: E0904 17:40:28.410476 1531 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:40:28.415678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:40:28.415880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:40:28.416374 systemd[1]: kubelet.service: Consumed 1.780s CPU time. Sep 4 17:40:30.758647 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:40:30.760232 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:58700.service - OpenSSH per-connection server daemon (10.0.0.1:58700). Sep 4 17:40:30.810630 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 58700 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:30.813386 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:30.825452 systemd-logind[1427]: New session 1 of user core. Sep 4 17:40:30.827296 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:40:30.846348 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:40:30.860901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:40:30.877817 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 17:40:30.882266 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:40:31.002867 systemd[1549]: Queued start job for default target default.target. Sep 4 17:40:31.012682 systemd[1549]: Created slice app.slice - User Application Slice. Sep 4 17:40:31.012708 systemd[1549]: Reached target paths.target - Paths. Sep 4 17:40:31.012721 systemd[1549]: Reached target timers.target - Timers. Sep 4 17:40:31.014405 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:40:31.026789 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:40:31.026948 systemd[1549]: Reached target sockets.target - Sockets. Sep 4 17:40:31.026969 systemd[1549]: Reached target basic.target - Basic System. Sep 4 17:40:31.027014 systemd[1549]: Reached target default.target - Main User Target. Sep 4 17:40:31.027054 systemd[1549]: Startup finished in 134ms. Sep 4 17:40:31.027730 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:40:31.029522 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:40:31.093219 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:58710.service - OpenSSH per-connection server daemon (10.0.0.1:58710). Sep 4 17:40:31.133346 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 58710 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.135162 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.139634 systemd-logind[1427]: New session 2 of user core. Sep 4 17:40:31.147507 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:40:31.202616 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:31.212209 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:58710.service: Deactivated successfully. Sep 4 17:40:31.214185 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 4 17:40:31.215958 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:40:31.222717 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:58722.service - OpenSSH per-connection server daemon (10.0.0.1:58722). Sep 4 17:40:31.223576 systemd-logind[1427]: Removed session 2. Sep 4 17:40:31.253522 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.255111 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.259157 systemd-logind[1427]: New session 3 of user core. Sep 4 17:40:31.269775 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:40:31.321601 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:31.331553 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:58722.service: Deactivated successfully. Sep 4 17:40:31.333569 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:40:31.334976 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:40:31.336216 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:58730.service - OpenSSH per-connection server daemon (10.0.0.1:58730). Sep 4 17:40:31.337044 systemd-logind[1427]: Removed session 3. Sep 4 17:40:31.373499 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 58730 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.375333 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.380139 systemd-logind[1427]: New session 4 of user core. Sep 4 17:40:31.389436 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:40:31.446122 sshd[1575]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:31.454142 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:58730.service: Deactivated successfully. Sep 4 17:40:31.456058 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 4 17:40:31.458213 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:40:31.470567 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:58732.service - OpenSSH per-connection server daemon (10.0.0.1:58732). Sep 4 17:40:31.471806 systemd-logind[1427]: Removed session 4. Sep 4 17:40:31.502227 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 58732 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.503863 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.508885 systemd-logind[1427]: New session 5 of user core. Sep 4 17:40:31.518628 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:40:31.632526 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:40:31.632909 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:40:31.650964 sudo[1586]: pam_unix(sudo:session): session closed for user root Sep 4 17:40:31.653674 sshd[1582]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:31.679887 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:58732.service: Deactivated successfully. Sep 4 17:40:31.681937 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:40:31.684186 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:40:31.697296 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:58738.service - OpenSSH per-connection server daemon (10.0.0.1:58738). Sep 4 17:40:31.698658 systemd-logind[1427]: Removed session 5. Sep 4 17:40:31.733235 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 58738 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.735282 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.740168 systemd-logind[1427]: New session 6 of user core. 
Sep 4 17:40:31.750497 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:40:31.805709 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:40:31.806056 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:40:31.810250 sudo[1595]: pam_unix(sudo:session): session closed for user root Sep 4 17:40:31.817375 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:40:31.817733 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:40:31.833566 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:40:31.835223 auditctl[1598]: No rules Sep 4 17:40:31.836711 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:40:31.836986 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:40:31.838811 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:40:31.871142 augenrules[1616]: No rules Sep 4 17:40:31.872385 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:40:31.873774 sudo[1594]: pam_unix(sudo:session): session closed for user root Sep 4 17:40:31.875921 sshd[1591]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:31.883786 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:58738.service: Deactivated successfully. Sep 4 17:40:31.886341 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:40:31.887998 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:40:31.897634 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:58742.service - OpenSSH per-connection server daemon (10.0.0.1:58742). Sep 4 17:40:31.898686 systemd-logind[1427]: Removed session 6. 
Sep 4 17:40:31.929470 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 58742 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:40:31.931034 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:31.934782 systemd-logind[1427]: New session 7 of user core. Sep 4 17:40:31.944433 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:40:31.997660 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:40:31.998016 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:40:32.157595 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:40:32.157770 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:40:32.994231 dockerd[1638]: time="2024-09-04T17:40:32.994129354Z" level=info msg="Starting up" Sep 4 17:40:33.978535 dockerd[1638]: time="2024-09-04T17:40:33.978376535Z" level=info msg="Loading containers: start." Sep 4 17:40:34.168325 kernel: Initializing XFRM netlink socket Sep 4 17:40:34.260569 systemd-networkd[1375]: docker0: Link UP Sep 4 17:40:34.388044 dockerd[1638]: time="2024-09-04T17:40:34.387973160Z" level=info msg="Loading containers: done." Sep 4 17:40:34.408795 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1725827171-merged.mount: Deactivated successfully. 
Sep 4 17:40:34.485752 dockerd[1638]: time="2024-09-04T17:40:34.485674013Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:40:34.485905 dockerd[1638]: time="2024-09-04T17:40:34.485834735Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:40:34.486037 dockerd[1638]: time="2024-09-04T17:40:34.486009052Z" level=info msg="Daemon has completed initialization" Sep 4 17:40:34.784717 dockerd[1638]: time="2024-09-04T17:40:34.784596948Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:40:34.784981 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:40:35.794087 containerd[1446]: time="2024-09-04T17:40:35.794038142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:40:37.753141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625154865.mount: Deactivated successfully. Sep 4 17:40:38.666156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:40:38.671461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:38.965244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:40:38.970146 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:40:39.138454 kubelet[1817]: E0904 17:40:39.138352 1817 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:40:39.147462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:40:39.147679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:40:40.567299 containerd[1446]: time="2024-09-04T17:40:40.567188031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:40.568114 containerd[1446]: time="2024-09-04T17:40:40.568036774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 4 17:40:40.570360 containerd[1446]: time="2024-09-04T17:40:40.570323095Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:40.575552 containerd[1446]: time="2024-09-04T17:40:40.575457581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:40.576623 containerd[1446]: time="2024-09-04T17:40:40.576580518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 4.782485379s" Sep 4 17:40:40.576623 containerd[1446]: time="2024-09-04T17:40:40.576622677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:40:40.606953 containerd[1446]: time="2024-09-04T17:40:40.606892067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:40:42.668886 containerd[1446]: time="2024-09-04T17:40:42.668813771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:42.669802 containerd[1446]: time="2024-09-04T17:40:42.669674126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 4 17:40:42.670920 containerd[1446]: time="2024-09-04T17:40:42.670864550Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:42.673839 containerd[1446]: time="2024-09-04T17:40:42.673785582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:42.675040 containerd[1446]: time="2024-09-04T17:40:42.674991836Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 
2.068052511s" Sep 4 17:40:42.675040 containerd[1446]: time="2024-09-04T17:40:42.675033174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:40:42.700437 containerd[1446]: time="2024-09-04T17:40:42.700334188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:40:44.152337 containerd[1446]: time="2024-09-04T17:40:44.152225206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:44.154010 containerd[1446]: time="2024-09-04T17:40:44.153920047Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 4 17:40:44.155141 containerd[1446]: time="2024-09-04T17:40:44.155083230Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:44.158567 containerd[1446]: time="2024-09-04T17:40:44.158517826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:44.160047 containerd[1446]: time="2024-09-04T17:40:44.160008404Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.459620555s" Sep 4 17:40:44.160113 containerd[1446]: time="2024-09-04T17:40:44.160048329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference 
\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:40:44.185097 containerd[1446]: time="2024-09-04T17:40:44.185047777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:40:45.194882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155997255.mount: Deactivated successfully. Sep 4 17:40:45.970274 containerd[1446]: time="2024-09-04T17:40:45.970188864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:45.971072 containerd[1446]: time="2024-09-04T17:40:45.971010085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 4 17:40:45.972247 containerd[1446]: time="2024-09-04T17:40:45.972200810Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:45.974738 containerd[1446]: time="2024-09-04T17:40:45.974691134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:45.975351 containerd[1446]: time="2024-09-04T17:40:45.975298324Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 1.790201644s" Sep 4 17:40:45.975351 containerd[1446]: time="2024-09-04T17:40:45.975343779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:40:45.998769 
containerd[1446]: time="2024-09-04T17:40:45.998679275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:40:46.463985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953152951.mount: Deactivated successfully. Sep 4 17:40:47.657447 containerd[1446]: time="2024-09-04T17:40:47.657362337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:47.658222 containerd[1446]: time="2024-09-04T17:40:47.658170273Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:40:47.660933 containerd[1446]: time="2024-09-04T17:40:47.660884738Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:47.665046 containerd[1446]: time="2024-09-04T17:40:47.664992307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:47.666353 containerd[1446]: time="2024-09-04T17:40:47.666292978Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.667568508s" Sep 4 17:40:47.666417 containerd[1446]: time="2024-09-04T17:40:47.666354313Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:40:47.699815 containerd[1446]: time="2024-09-04T17:40:47.699739760Z" level=info 
msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:40:48.427198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440058655.mount: Deactivated successfully. Sep 4 17:40:48.433762 containerd[1446]: time="2024-09-04T17:40:48.433647808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:48.436204 containerd[1446]: time="2024-09-04T17:40:48.436158671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:40:48.437394 containerd[1446]: time="2024-09-04T17:40:48.437365435Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:48.440088 containerd[1446]: time="2024-09-04T17:40:48.440040807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:48.440875 containerd[1446]: time="2024-09-04T17:40:48.440838644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 741.051575ms" Sep 4 17:40:48.440875 containerd[1446]: time="2024-09-04T17:40:48.440869051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:40:48.464146 containerd[1446]: time="2024-09-04T17:40:48.463913621Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:40:48.962633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948655958.mount: Deactivated successfully. 
Sep 4 17:40:49.397851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:40:49.404475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:49.648976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:49.654071 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:40:49.712574 kubelet[1990]: E0904 17:40:49.712496 1990 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:40:49.718052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:40:49.718286 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:40:54.717518 containerd[1446]: time="2024-09-04T17:40:54.717384454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:54.718567 containerd[1446]: time="2024-09-04T17:40:54.718505538Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:40:54.720385 containerd[1446]: time="2024-09-04T17:40:54.720277914Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:54.795939 containerd[1446]: time="2024-09-04T17:40:54.795847612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:40:54.797648 containerd[1446]: time="2024-09-04T17:40:54.797613617Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.333651214s" Sep 4 17:40:54.797720 containerd[1446]: time="2024-09-04T17:40:54.797647891Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:40:57.915000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:57.929628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:57.949889 systemd[1]: Reloading requested from client PID 2116 ('systemctl') (unit session-7.scope)... Sep 4 17:40:57.949912 systemd[1]: Reloading... 
Sep 4 17:40:58.059003 zram_generator::config[2153]: No configuration found. Sep 4 17:40:58.527841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:40:58.607516 systemd[1]: Reloading finished in 657 ms. Sep 4 17:40:58.659908 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:40:58.660044 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:40:58.660575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:58.662612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:58.832984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:58.849726 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:40:58.901973 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:40:58.901973 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:40:58.901973 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:40:58.903705 kubelet[2202]: I0904 17:40:58.903515 2202 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:40:59.442012 kubelet[2202]: I0904 17:40:59.441962 2202 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:40:59.442012 kubelet[2202]: I0904 17:40:59.441999 2202 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:40:59.442294 kubelet[2202]: I0904 17:40:59.442273 2202 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:40:59.465912 kubelet[2202]: E0904 17:40:59.465845 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.471072 kubelet[2202]: I0904 17:40:59.471003 2202 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:40:59.489653 kubelet[2202]: I0904 17:40:59.489606 2202 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:40:59.491797 kubelet[2202]: I0904 17:40:59.491756 2202 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:40:59.492061 kubelet[2202]: I0904 17:40:59.492030 2202 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:40:59.492171 kubelet[2202]: I0904 17:40:59.492071 2202 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:40:59.492171 kubelet[2202]: I0904 17:40:59.492084 2202 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:40:59.492265 kubelet[2202]: I0904 17:40:59.492242 2202 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:40:59.492423 kubelet[2202]: I0904 17:40:59.492402 2202 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:40:59.492445 kubelet[2202]: I0904 17:40:59.492429 2202 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:40:59.492472 kubelet[2202]: I0904 17:40:59.492465 2202 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:40:59.492503 kubelet[2202]: I0904 17:40:59.492495 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:40:59.493024 kubelet[2202]: W0904 17:40:59.492908 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.493024 kubelet[2202]: W0904 17:40:59.492909 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.493024 kubelet[2202]: E0904 17:40:59.492995 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.493024 kubelet[2202]: E0904 17:40:59.492996 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.494266 kubelet[2202]: I0904 17:40:59.494250 2202 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:40:59.498312 kubelet[2202]: I0904 17:40:59.498238 2202 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:40:59.499819 kubelet[2202]: W0904 17:40:59.499789 2202 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:40:59.500890 kubelet[2202]: I0904 17:40:59.500633 2202 server.go:1256] "Started kubelet"
Sep 4 17:40:59.500890 kubelet[2202]: I0904 17:40:59.500713 2202 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:40:59.501181 kubelet[2202]: I0904 17:40:59.501151 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:40:59.501761 kubelet[2202]: I0904 17:40:59.501740 2202 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:40:59.501917 kubelet[2202]: I0904 17:40:59.501887 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:40:59.502054 kubelet[2202]: I0904 17:40:59.502025 2202 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:40:59.503784 kubelet[2202]: E0904 17:40:59.503767 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:40:59.503884 kubelet[2202]: I0904 17:40:59.503874 2202 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:40:59.504081 kubelet[2202]: I0904 17:40:59.504064 2202 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:40:59.506009 kubelet[2202]: I0904 17:40:59.505976 2202 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:40:59.506486 kubelet[2202]: W0904 17:40:59.506447 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.506577 kubelet[2202]: E0904 17:40:59.506565 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.507546 kubelet[2202]: I0904 17:40:59.507524 2202 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:40:59.522359 kubelet[2202]: I0904 17:40:59.507696 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:40:59.527888 kubelet[2202]: E0904 17:40:59.527620 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms"
Sep 4 17:40:59.528051 kubelet[2202]: E0904 17:40:59.527900 2202 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:40:59.531332 kubelet[2202]: I0904 17:40:59.528432 2202 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:40:59.531332 kubelet[2202]: E0904 17:40:59.530126 2202 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21b5516896782 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:40:59.500595074 +0000 UTC m=+0.645808307,LastTimestamp:2024-09-04 17:40:59.500595074 +0000 UTC m=+0.645808307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:40:59.541610 kubelet[2202]: I0904 17:40:59.541563 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:40:59.543733 kubelet[2202]: I0904 17:40:59.543706 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:40:59.543800 kubelet[2202]: I0904 17:40:59.543748 2202 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:40:59.543800 kubelet[2202]: I0904 17:40:59.543781 2202 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:40:59.544124 kubelet[2202]: E0904 17:40:59.543864 2202 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:40:59.545568 kubelet[2202]: W0904 17:40:59.545498 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.545633 kubelet[2202]: E0904 17:40:59.545576 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:40:59.552070 kubelet[2202]: I0904 17:40:59.552044 2202 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:40:59.552070 kubelet[2202]: I0904 17:40:59.552065 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:40:59.552182 kubelet[2202]: I0904 17:40:59.552087 2202 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:40:59.606031 kubelet[2202]: I0904 17:40:59.605990 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:40:59.606431 kubelet[2202]: E0904 17:40:59.606406 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Sep 4 17:40:59.644817 kubelet[2202]: E0904 17:40:59.644742 2202 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:40:59.729205 kubelet[2202]: E0904 17:40:59.729043 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms"
Sep 4 17:40:59.808594 kubelet[2202]: I0904 17:40:59.808558 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:40:59.808946 kubelet[2202]: E0904 17:40:59.808918 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Sep 4 17:40:59.845162 kubelet[2202]: E0904 17:40:59.845085 2202 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:40:59.924066 kubelet[2202]: I0904 17:40:59.924015 2202 policy_none.go:49] "None policy: Start"
Sep 4 17:40:59.924973 kubelet[2202]: I0904 17:40:59.924952 2202 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:40:59.925018 kubelet[2202]: I0904 17:40:59.924990 2202 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:41:00.046985 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:41:00.062497 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:41:00.078205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:41:00.079428 kubelet[2202]: I0904 17:41:00.079401 2202 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:41:00.079803 kubelet[2202]: I0904 17:41:00.079784 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:41:00.080965 kubelet[2202]: E0904 17:41:00.080946 2202 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 17:41:00.130124 kubelet[2202]: E0904 17:41:00.130060 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms"
Sep 4 17:41:00.211587 kubelet[2202]: I0904 17:41:00.211534 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:41:00.211982 kubelet[2202]: E0904 17:41:00.211943 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Sep 4 17:41:00.246341 kubelet[2202]: I0904 17:41:00.246214 2202 topology_manager.go:215] "Topology Admit Handler" podUID="b9ea2cbf4676427bfe899c5c41411c0d" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:41:00.248004 kubelet[2202]: I0904 17:41:00.247969 2202 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:41:00.249349 kubelet[2202]: I0904 17:41:00.249296 2202 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:41:00.255520 systemd[1]: Created slice kubepods-burstable-podb9ea2cbf4676427bfe899c5c41411c0d.slice - libcontainer container kubepods-burstable-podb9ea2cbf4676427bfe899c5c41411c0d.slice.
Sep 4 17:41:00.275684 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice.
Sep 4 17:41:00.292025 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice.
Sep 4 17:41:00.309629 kubelet[2202]: I0904 17:41:00.309475 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:00.309629 kubelet[2202]: I0904 17:41:00.309524 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:00.309629 kubelet[2202]: I0904 17:41:00.309548 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:00.309629 kubelet[2202]: I0904 17:41:00.309567 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:00.309629 kubelet[2202]: I0904 17:41:00.309591 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:41:00.310188 kubelet[2202]: I0904 17:41:00.309615 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:00.310188 kubelet[2202]: I0904 17:41:00.309634 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:00.310188 kubelet[2202]: I0904 17:41:00.309670 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:00.310188 kubelet[2202]: I0904 17:41:00.309689 2202 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:00.573659 kubelet[2202]: E0904 17:41:00.573510 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:00.574650 containerd[1446]: time="2024-09-04T17:41:00.574597893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9ea2cbf4676427bfe899c5c41411c0d,Namespace:kube-system,Attempt:0,}"
Sep 4 17:41:00.589852 kubelet[2202]: E0904 17:41:00.589810 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:00.590406 containerd[1446]: time="2024-09-04T17:41:00.590358293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}"
Sep 4 17:41:00.607474 kubelet[2202]: W0904 17:41:00.607408 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:00.607474 kubelet[2202]: E0904 17:41:00.607472 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:00.659791 kubelet[2202]: E0904 17:41:00.659748 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:00.660384 containerd[1446]: time="2024-09-04T17:41:00.660341317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}"
Sep 4 17:41:00.861087 kubelet[2202]: W0904 17:41:00.860925 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:00.861087 kubelet[2202]: E0904 17:41:00.860984 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:00.930882 kubelet[2202]: E0904 17:41:00.930816 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s"
Sep 4 17:41:00.975839 kubelet[2202]: W0904 17:41:00.975776 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:00.975839 kubelet[2202]: E0904 17:41:00.975838 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:01.013231 kubelet[2202]: I0904 17:41:01.013196 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:41:01.013581 kubelet[2202]: E0904 17:41:01.013550 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Sep 4 17:41:01.098631 kubelet[2202]: W0904 17:41:01.098564 2202 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:01.098802 kubelet[2202]: E0904 17:41:01.098654 2202 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:01.467927 kubelet[2202]: E0904 17:41:01.467885 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 4 17:41:01.490116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132273389.mount: Deactivated successfully.
Sep 4 17:41:01.499513 containerd[1446]: time="2024-09-04T17:41:01.499457839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:41:01.501557 containerd[1446]: time="2024-09-04T17:41:01.501503188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:41:01.502748 containerd[1446]: time="2024-09-04T17:41:01.502690917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:41:01.503840 containerd[1446]: time="2024-09-04T17:41:01.503804014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:41:01.505256 containerd[1446]: time="2024-09-04T17:41:01.505216102Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:41:01.506052 containerd[1446]: time="2024-09-04T17:41:01.506010440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:41:01.507876 containerd[1446]: time="2024-09-04T17:41:01.507768860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:41:01.509589 containerd[1446]: time="2024-09-04T17:41:01.509545245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:41:01.511624 
containerd[1446]: time="2024-09-04T17:41:01.511576145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 936.870756ms" Sep 4 17:41:01.512399 containerd[1446]: time="2024-09-04T17:41:01.512358901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 851.93147ms" Sep 4 17:41:01.517247 containerd[1446]: time="2024-09-04T17:41:01.517189781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 926.725715ms" Sep 4 17:41:01.783173 containerd[1446]: time="2024-09-04T17:41:01.782591212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:01.783173 containerd[1446]: time="2024-09-04T17:41:01.782652168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:01.783173 containerd[1446]: time="2024-09-04T17:41:01.782662357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.783173 containerd[1446]: time="2024-09-04T17:41:01.782759343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.785684 containerd[1446]: time="2024-09-04T17:41:01.785396151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:01.785684 containerd[1446]: time="2024-09-04T17:41:01.785516812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:01.785684 containerd[1446]: time="2024-09-04T17:41:01.785549775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.786330 containerd[1446]: time="2024-09-04T17:41:01.786076101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:01.786401 containerd[1446]: time="2024-09-04T17:41:01.786378148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.788416 containerd[1446]: time="2024-09-04T17:41:01.788150374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:01.788416 containerd[1446]: time="2024-09-04T17:41:01.788200109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.788416 containerd[1446]: time="2024-09-04T17:41:01.788294299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:01.823513 systemd[1]: Started cri-containerd-8bc9254e29d8994fa99bc129461a839dbafe4de190407e55da1413bfc91c09fc.scope - libcontainer container 8bc9254e29d8994fa99bc129461a839dbafe4de190407e55da1413bfc91c09fc. 
Sep 4 17:41:01.825733 systemd[1]: Started cri-containerd-ec7bf7984202824e46c9348f821c58aeeadfda16459d1261f36f495cb45534d9.scope - libcontainer container ec7bf7984202824e46c9348f821c58aeeadfda16459d1261f36f495cb45534d9.
Sep 4 17:41:01.830634 systemd[1]: Started cri-containerd-da034ead86880f568a5b8a60c76f3321c824e5cfe92e65a8606304c8ca7f861f.scope - libcontainer container da034ead86880f568a5b8a60c76f3321c824e5cfe92e65a8606304c8ca7f861f.
Sep 4 17:41:01.876247 containerd[1446]: time="2024-09-04T17:41:01.876201753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9ea2cbf4676427bfe899c5c41411c0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc9254e29d8994fa99bc129461a839dbafe4de190407e55da1413bfc91c09fc\""
Sep 4 17:41:01.877610 kubelet[2202]: E0904 17:41:01.877583 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:01.883454 containerd[1446]: time="2024-09-04T17:41:01.883395739Z" level=info msg="CreateContainer within sandbox \"8bc9254e29d8994fa99bc129461a839dbafe4de190407e55da1413bfc91c09fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:41:01.883591 containerd[1446]: time="2024-09-04T17:41:01.883527321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec7bf7984202824e46c9348f821c58aeeadfda16459d1261f36f495cb45534d9\""
Sep 4 17:41:01.883591 containerd[1446]: time="2024-09-04T17:41:01.883574421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"da034ead86880f568a5b8a60c76f3321c824e5cfe92e65a8606304c8ca7f861f\""
Sep 4 17:41:01.884483 kubelet[2202]: E0904 17:41:01.884441 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:01.884613 kubelet[2202]: E0904 17:41:01.884594 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:01.886354 containerd[1446]: time="2024-09-04T17:41:01.886164119Z" level=info msg="CreateContainer within sandbox \"ec7bf7984202824e46c9348f821c58aeeadfda16459d1261f36f495cb45534d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:41:01.887440 containerd[1446]: time="2024-09-04T17:41:01.887379772Z" level=info msg="CreateContainer within sandbox \"da034ead86880f568a5b8a60c76f3321c824e5cfe92e65a8606304c8ca7f861f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:41:02.264376 containerd[1446]: time="2024-09-04T17:41:02.264197147Z" level=info msg="CreateContainer within sandbox \"8bc9254e29d8994fa99bc129461a839dbafe4de190407e55da1413bfc91c09fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fbd1f04665973186fff42afdd311613f662a020ee352a478068771b24fa510fc\""
Sep 4 17:41:02.265373 containerd[1446]: time="2024-09-04T17:41:02.265324900Z" level=info msg="StartContainer for \"fbd1f04665973186fff42afdd311613f662a020ee352a478068771b24fa510fc\""
Sep 4 17:41:02.288557 containerd[1446]: time="2024-09-04T17:41:02.288482770Z" level=info msg="CreateContainer within sandbox \"ec7bf7984202824e46c9348f821c58aeeadfda16459d1261f36f495cb45534d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0291eaa09c3f5c4c52f31a64e8fd42325591925adf0d3378e65a3b97b4f453f7\""
Sep 4 17:41:02.289355 containerd[1446]: time="2024-09-04T17:41:02.289281876Z" level=info msg="StartContainer for \"0291eaa09c3f5c4c52f31a64e8fd42325591925adf0d3378e65a3b97b4f453f7\""
Sep 4 17:41:02.291785 containerd[1446]: time="2024-09-04T17:41:02.291746230Z" level=info msg="CreateContainer within sandbox \"da034ead86880f568a5b8a60c76f3321c824e5cfe92e65a8606304c8ca7f861f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a13359eb2ada065b409b573346b7fbdef5b8fa1fa287fe21cc40b735e8f288e1\""
Sep 4 17:41:02.292209 containerd[1446]: time="2024-09-04T17:41:02.292177282Z" level=info msg="StartContainer for \"a13359eb2ada065b409b573346b7fbdef5b8fa1fa287fe21cc40b735e8f288e1\""
Sep 4 17:41:02.297586 systemd[1]: Started cri-containerd-fbd1f04665973186fff42afdd311613f662a020ee352a478068771b24fa510fc.scope - libcontainer container fbd1f04665973186fff42afdd311613f662a020ee352a478068771b24fa510fc.
Sep 4 17:41:02.355596 systemd[1]: Started cri-containerd-a13359eb2ada065b409b573346b7fbdef5b8fa1fa287fe21cc40b735e8f288e1.scope - libcontainer container a13359eb2ada065b409b573346b7fbdef5b8fa1fa287fe21cc40b735e8f288e1.
Sep 4 17:41:02.364910 systemd[1]: Started cri-containerd-0291eaa09c3f5c4c52f31a64e8fd42325591925adf0d3378e65a3b97b4f453f7.scope - libcontainer container 0291eaa09c3f5c4c52f31a64e8fd42325591925adf0d3378e65a3b97b4f453f7.
Sep 4 17:41:02.520029 containerd[1446]: time="2024-09-04T17:41:02.519844586Z" level=info msg="StartContainer for \"a13359eb2ada065b409b573346b7fbdef5b8fa1fa287fe21cc40b735e8f288e1\" returns successfully"
Sep 4 17:41:02.520678 containerd[1446]: time="2024-09-04T17:41:02.520374778Z" level=info msg="StartContainer for \"0291eaa09c3f5c4c52f31a64e8fd42325591925adf0d3378e65a3b97b4f453f7\" returns successfully"
Sep 4 17:41:02.520678 containerd[1446]: time="2024-09-04T17:41:02.520421106Z" level=info msg="StartContainer for \"fbd1f04665973186fff42afdd311613f662a020ee352a478068771b24fa510fc\" returns successfully"
Sep 4 17:41:02.568330 kubelet[2202]: E0904 17:41:02.567509 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:02.570937 kubelet[2202]: E0904 17:41:02.570912 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:02.573953 kubelet[2202]: E0904 17:41:02.573931 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:02.617843 kubelet[2202]: I0904 17:41:02.617786 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:41:03.575368 kubelet[2202]: E0904 17:41:03.575331 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:03.878077 kubelet[2202]: E0904 17:41:03.877951 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 4 17:41:03.957575 kubelet[2202]: I0904 17:41:03.957519 2202 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:41:04.146710 kubelet[2202]: E0904 17:41:04.146558 2202 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 4 17:41:04.147035 kubelet[2202]: E0904 17:41:04.146897 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:04.495407 kubelet[2202]: I0904 17:41:04.495275 2202 apiserver.go:52] "Watching apiserver"
Sep 4 17:41:04.506360 kubelet[2202]: I0904 17:41:04.506322 2202 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:41:06.497985 kubelet[2202]: E0904 17:41:06.497927 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:06.579401 kubelet[2202]: E0904 17:41:06.579357 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:06.647958 systemd[1]: Reloading requested from client PID 2483 ('systemctl') (unit session-7.scope)...
Sep 4 17:41:06.647977 systemd[1]: Reloading...
Sep 4 17:41:06.792350 zram_generator::config[2523]: No configuration found.
Sep 4 17:41:06.928890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:41:07.039737 systemd[1]: Reloading finished in 391 ms.
Sep 4 17:41:07.088476 kubelet[2202]: I0904 17:41:07.088197 2202 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:41:07.088298 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:41:07.110216 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:41:07.110606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:41:07.110671 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 116.4M memory peak, 0B memory swap peak.
Sep 4 17:41:07.123518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:41:07.272235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:41:07.284616 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:41:07.373761 kubelet[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:41:07.373761 kubelet[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:41:07.373761 kubelet[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:41:07.374187 kubelet[2565]: I0904 17:41:07.373819 2565 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:41:07.378556 kubelet[2565]: I0904 17:41:07.378525 2565 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:41:07.378556 kubelet[2565]: I0904 17:41:07.378548 2565 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:41:07.378752 kubelet[2565]: I0904 17:41:07.378728 2565 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:41:07.380209 kubelet[2565]: I0904 17:41:07.380185 2565 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:41:07.382364 kubelet[2565]: I0904 17:41:07.382290 2565 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:41:07.391295 kubelet[2565]: I0904 17:41:07.391260 2565 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:41:07.391550 kubelet[2565]: I0904 17:41:07.391531 2565 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:41:07.391718 kubelet[2565]: I0904 17:41:07.391697 2565 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:41:07.391816 kubelet[2565]: I0904 17:41:07.391727 2565 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:41:07.391816 kubelet[2565]: I0904 17:41:07.391736 2565 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:41:07.391816 kubelet[2565]: I0904 17:41:07.391767 2565 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:41:07.391884 kubelet[2565]: I0904 17:41:07.391860 2565 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:41:07.391884 kubelet[2565]: I0904 17:41:07.391883 2565 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:41:07.391931 kubelet[2565]: I0904 17:41:07.391909 2565 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:41:07.391931 kubelet[2565]: I0904 17:41:07.391923 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:41:07.393224 kubelet[2565]: I0904 17:41:07.392833 2565 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:41:07.393294 kubelet[2565]: I0904 17:41:07.393244 2565 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:41:07.396624 kubelet[2565]: I0904 17:41:07.393740 2565 server.go:1256] "Started kubelet"
Sep 4 17:41:07.396624 kubelet[2565]: I0904 17:41:07.393973 2565 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:41:07.396624 kubelet[2565]: I0904 17:41:07.394008 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:41:07.396624 kubelet[2565]: I0904 17:41:07.394368 2565 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:41:07.396844 kubelet[2565]: I0904 17:41:07.396682 2565 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:41:07.400281 kubelet[2565]: I0904 17:41:07.398659 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:41:07.403833 kubelet[2565]: I0904 17:41:07.403809 2565 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:41:07.404189 kubelet[2565]: I0904 17:41:07.404166 2565 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:41:07.404413 kubelet[2565]: I0904 17:41:07.404395 2565 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:41:07.405905 kubelet[2565]: I0904 17:41:07.404856 2565 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:41:07.405905 kubelet[2565]: I0904 17:41:07.405024 2565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:41:07.405905 kubelet[2565]: E0904 17:41:07.405739 2565 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:41:07.406621 kubelet[2565]: I0904 17:41:07.406403 2565 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:41:07.411699 kubelet[2565]: I0904 17:41:07.411648 2565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:41:07.413458 kubelet[2565]: I0904 17:41:07.413433 2565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:41:07.413541 kubelet[2565]: I0904 17:41:07.413483 2565 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:41:07.413541 kubelet[2565]: I0904 17:41:07.413508 2565 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:41:07.413589 kubelet[2565]: E0904 17:41:07.413571 2565 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:41:07.440909 kubelet[2565]: I0904 17:41:07.440871 2565 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:41:07.440909 kubelet[2565]: I0904 17:41:07.440900 2565 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:41:07.440909 kubelet[2565]: I0904 17:41:07.440921 2565 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:41:07.441115 kubelet[2565]: I0904 17:41:07.441101 2565 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:41:07.441140 kubelet[2565]: I0904 17:41:07.441128 2565 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:41:07.441140 kubelet[2565]: I0904 17:41:07.441135 2565 policy_none.go:49] "None policy: Start"
Sep 4 17:41:07.441782 kubelet[2565]: I0904 17:41:07.441737 2565 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:41:07.441782 kubelet[2565]: I0904 17:41:07.441762 2565 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:41:07.441920 kubelet[2565]: I0904 17:41:07.441906 2565 state_mem.go:75] "Updated machine memory state"
Sep 4 17:41:07.447518 kubelet[2565]: I0904 17:41:07.447461 2565 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:41:07.447831 kubelet[2565]: I0904 17:41:07.447740 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:41:07.509900 kubelet[2565]: I0904 17:41:07.509858 2565 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:41:07.514031 kubelet[2565]: I0904 17:41:07.514001 2565 topology_manager.go:215] "Topology Admit Handler" podUID="b9ea2cbf4676427bfe899c5c41411c0d" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:41:07.514121 kubelet[2565]: I0904 17:41:07.514096 2565 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:41:07.514148 kubelet[2565]: I0904 17:41:07.514137 2565 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:41:07.605774 kubelet[2565]: I0904 17:41:07.605062 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.605774 kubelet[2565]: I0904 17:41:07.605114 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.605774 kubelet[2565]: I0904 17:41:07.605144 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.605774 kubelet[2565]: I0904 17:41:07.605169 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.605774 kubelet[2565]: I0904 17:41:07.605190 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:07.606203 kubelet[2565]: I0904 17:41:07.605284 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.606203 kubelet[2565]: I0904 17:41:07.605389 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:41:07.606203 kubelet[2565]: I0904 17:41:07.605412 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:07.606203 kubelet[2565]: I0904 17:41:07.605439 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9ea2cbf4676427bfe899c5c41411c0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9ea2cbf4676427bfe899c5c41411c0d\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:41:07.919569 kubelet[2565]: E0904 17:41:07.918250 2565 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:41:07.919569 kubelet[2565]: E0904 17:41:07.919261 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:07.919716 kubelet[2565]: E0904 17:41:07.919590 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:07.919742 kubelet[2565]: E0904 17:41:07.919715 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:07.967673 kubelet[2565]: I0904 17:41:07.967627 2565 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Sep 4 17:41:07.967817 kubelet[2565]: I0904 17:41:07.967720 2565 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:41:08.392416 kubelet[2565]: I0904 17:41:08.392247 2565 apiserver.go:52] "Watching apiserver"
Sep 4 17:41:08.404416 kubelet[2565]: I0904 17:41:08.404381 2565 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:41:08.422337 kubelet[2565]: I0904 17:41:08.421695 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.421634594 podStartE2EDuration="1.421634594s" podCreationTimestamp="2024-09-04 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:08.417962562 +0000 UTC m=+1.128307912" watchObservedRunningTime="2024-09-04 17:41:08.421634594 +0000 UTC m=+1.131979924"
Sep 4 17:41:08.422635 kubelet[2565]: E0904 17:41:08.422612 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:08.423342 kubelet[2565]: E0904 17:41:08.423235 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:08.424552 kubelet[2565]: E0904 17:41:08.424525 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:08.426209 kubelet[2565]: I0904 17:41:08.426180 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.426144304 podStartE2EDuration="1.426144304s" podCreationTimestamp="2024-09-04 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:08.424144238 +0000 UTC m=+1.134489568" watchObservedRunningTime="2024-09-04 17:41:08.426144304 +0000 UTC m=+1.136489634"
Sep 4 17:41:08.437250 kubelet[2565]: I0904 17:41:08.437209 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.4371596970000002 podStartE2EDuration="2.437159697s" podCreationTimestamp="2024-09-04 17:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:08.436709583 +0000 UTC m=+1.147054913" watchObservedRunningTime="2024-09-04 17:41:08.437159697 +0000 UTC m=+1.147505027"
Sep 4 17:41:09.424869 kubelet[2565]: E0904 17:41:09.424813 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:10.426217 kubelet[2565]: E0904 17:41:10.426182 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:11.258425 update_engine[1428]: I0904 17:41:11.258368 1428 update_attempter.cc:509] Updating boot flags...
Sep 4 17:41:11.291335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2648)
Sep 4 17:41:11.337342 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2647)
Sep 4 17:41:11.365343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2647)
Sep 4 17:41:12.652548 sudo[1628]: pam_unix(sudo:session): session closed for user root
Sep 4 17:41:12.659482 sshd[1624]: pam_unix(sshd:session): session closed for user core
Sep 4 17:41:12.665029 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:58742.service: Deactivated successfully.
Sep 4 17:41:12.667007 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:41:12.667203 systemd[1]: session-7.scope: Consumed 5.824s CPU time, 141.7M memory peak, 0B memory swap peak.
Sep 4 17:41:12.667663 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:41:12.668688 systemd-logind[1427]: Removed session 7.
Sep 4 17:41:13.959283 kubelet[2565]: E0904 17:41:13.959228 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:14.430582 kubelet[2565]: E0904 17:41:14.430449 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:14.682283 kubelet[2565]: E0904 17:41:14.682111 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:15.432531 kubelet[2565]: E0904 17:41:15.432474 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:18.411051 kubelet[2565]: E0904 17:41:18.410823 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:18.476738 kubelet[2565]: E0904 17:41:18.476706 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:21.394881 kubelet[2565]: I0904 17:41:21.394837 2565 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:41:21.395557 containerd[1446]: time="2024-09-04T17:41:21.395278499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:41:21.395913 kubelet[2565]: I0904 17:41:21.395622 2565 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:41:21.444910 kubelet[2565]: I0904 17:41:21.444854 2565 topology_manager.go:215] "Topology Admit Handler" podUID="89a164b0-8c66-4f66-bab9-2dc6d77e3a34" podNamespace="kube-system" podName="kube-proxy-59lvc"
Sep 4 17:41:21.455075 systemd[1]: Created slice kubepods-besteffort-pod89a164b0_8c66_4f66_bab9_2dc6d77e3a34.slice - libcontainer container kubepods-besteffort-pod89a164b0_8c66_4f66_bab9_2dc6d77e3a34.slice.
Sep 4 17:41:21.632804 kubelet[2565]: I0904 17:41:21.632711 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-proxy\") pod \"kube-proxy-59lvc\" (UID: \"89a164b0-8c66-4f66-bab9-2dc6d77e3a34\") " pod="kube-system/kube-proxy-59lvc"
Sep 4 17:41:21.632804 kubelet[2565]: I0904 17:41:21.632816 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4v8\" (UniqueName: \"kubernetes.io/projected/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-api-access-kb4v8\") pod \"kube-proxy-59lvc\" (UID: \"89a164b0-8c66-4f66-bab9-2dc6d77e3a34\") " pod="kube-system/kube-proxy-59lvc"
Sep 4 17:41:21.633067 kubelet[2565]: I0904 17:41:21.632844 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-lib-modules\") pod \"kube-proxy-59lvc\" (UID: \"89a164b0-8c66-4f66-bab9-2dc6d77e3a34\") " pod="kube-system/kube-proxy-59lvc"
Sep 4 17:41:21.633067 kubelet[2565]: I0904 17:41:21.632882 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-xtables-lock\") pod \"kube-proxy-59lvc\" (UID: \"89a164b0-8c66-4f66-bab9-2dc6d77e3a34\") " pod="kube-system/kube-proxy-59lvc"
Sep 4 17:41:21.740044 kubelet[2565]: E0904 17:41:21.739895 2565 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 4 17:41:21.740044 kubelet[2565]: E0904 17:41:21.739943 2565 projected.go:200] Error preparing data for projected volume kube-api-access-kb4v8 for pod kube-system/kube-proxy-59lvc: configmap "kube-root-ca.crt" not found
Sep 4 17:41:21.740044 kubelet[2565]: E0904 17:41:21.740022 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-api-access-kb4v8 podName:89a164b0-8c66-4f66-bab9-2dc6d77e3a34 nodeName:}" failed. No retries permitted until 2024-09-04 17:41:22.239996627 +0000 UTC m=+14.950341957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kb4v8" (UniqueName: "kubernetes.io/projected/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-api-access-kb4v8") pod "kube-proxy-59lvc" (UID: "89a164b0-8c66-4f66-bab9-2dc6d77e3a34") : configmap "kube-root-ca.crt" not found
Sep 4 17:41:22.337265 kubelet[2565]: E0904 17:41:22.337215 2565 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 4 17:41:22.337265 kubelet[2565]: E0904 17:41:22.337244 2565 projected.go:200] Error preparing data for projected volume kube-api-access-kb4v8 for pod kube-system/kube-proxy-59lvc: configmap "kube-root-ca.crt" not found
Sep 4 17:41:22.337476 kubelet[2565]: E0904 17:41:22.337289 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-api-access-kb4v8 podName:89a164b0-8c66-4f66-bab9-2dc6d77e3a34 nodeName:}" failed. No retries permitted until 2024-09-04 17:41:23.337275413 +0000 UTC m=+16.047620743 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kb4v8" (UniqueName: "kubernetes.io/projected/89a164b0-8c66-4f66-bab9-2dc6d77e3a34-kube-api-access-kb4v8") pod "kube-proxy-59lvc" (UID: "89a164b0-8c66-4f66-bab9-2dc6d77e3a34") : configmap "kube-root-ca.crt" not found
Sep 4 17:41:22.555748 kubelet[2565]: I0904 17:41:22.555683 2565 topology_manager.go:215] "Topology Admit Handler" podUID="9d4a4429-3232-4397-85f3-ef9b0a2d272c" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-9pd9j"
Sep 4 17:41:22.571451 systemd[1]: Created slice kubepods-besteffort-pod9d4a4429_3232_4397_85f3_ef9b0a2d272c.slice - libcontainer container kubepods-besteffort-pod9d4a4429_3232_4397_85f3_ef9b0a2d272c.slice.
Sep 4 17:41:22.740677 kubelet[2565]: I0904 17:41:22.740640 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pwn\" (UniqueName: \"kubernetes.io/projected/9d4a4429-3232-4397-85f3-ef9b0a2d272c-kube-api-access-p7pwn\") pod \"tigera-operator-5d56685c77-9pd9j\" (UID: \"9d4a4429-3232-4397-85f3-ef9b0a2d272c\") " pod="tigera-operator/tigera-operator-5d56685c77-9pd9j"
Sep 4 17:41:22.740677 kubelet[2565]: I0904 17:41:22.740686 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d4a4429-3232-4397-85f3-ef9b0a2d272c-var-lib-calico\") pod \"tigera-operator-5d56685c77-9pd9j\" (UID: \"9d4a4429-3232-4397-85f3-ef9b0a2d272c\") " pod="tigera-operator/tigera-operator-5d56685c77-9pd9j"
Sep 4 17:41:22.876085 containerd[1446]: time="2024-09-04T17:41:22.876023124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9pd9j,Uid:9d4a4429-3232-4397-85f3-ef9b0a2d272c,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:41:22.904473 containerd[1446]: time="2024-09-04T17:41:22.904347802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:41:22.904473 containerd[1446]: time="2024-09-04T17:41:22.904407484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:41:22.904473 containerd[1446]: time="2024-09-04T17:41:22.904418795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:22.904714 containerd[1446]: time="2024-09-04T17:41:22.904496191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:22.926456 systemd[1]: Started cri-containerd-98675ea07fab0c61e8a2cd619a928db4a6e3c00bbf4fbb7aef2799c1d1c7ea25.scope - libcontainer container 98675ea07fab0c61e8a2cd619a928db4a6e3c00bbf4fbb7aef2799c1d1c7ea25.
Sep 4 17:41:22.962879 containerd[1446]: time="2024-09-04T17:41:22.962828781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9pd9j,Uid:9d4a4429-3232-4397-85f3-ef9b0a2d272c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"98675ea07fab0c61e8a2cd619a928db4a6e3c00bbf4fbb7aef2799c1d1c7ea25\""
Sep 4 17:41:22.964866 containerd[1446]: time="2024-09-04T17:41:22.964659182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:41:23.566003 kubelet[2565]: E0904 17:41:23.565937 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:23.566536 containerd[1446]: time="2024-09-04T17:41:23.566343906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59lvc,Uid:89a164b0-8c66-4f66-bab9-2dc6d77e3a34,Namespace:kube-system,Attempt:0,}"
Sep 4 17:41:23.687992 containerd[1446]: time="2024-09-04T17:41:23.687855747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:23.687992 containerd[1446]: time="2024-09-04T17:41:23.687935738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:23.687992 containerd[1446]: time="2024-09-04T17:41:23.687955174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:23.688216 containerd[1446]: time="2024-09-04T17:41:23.688103133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:23.708441 systemd[1]: Started cri-containerd-0f5c73c285635aed4c6e6923cca55734388575794d45640feed0d6ccec614632.scope - libcontainer container 0f5c73c285635aed4c6e6923cca55734388575794d45640feed0d6ccec614632. Sep 4 17:41:23.731054 containerd[1446]: time="2024-09-04T17:41:23.730972051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59lvc,Uid:89a164b0-8c66-4f66-bab9-2dc6d77e3a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5c73c285635aed4c6e6923cca55734388575794d45640feed0d6ccec614632\"" Sep 4 17:41:23.732419 kubelet[2565]: E0904 17:41:23.732205 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:23.734569 containerd[1446]: time="2024-09-04T17:41:23.734523756Z" level=info msg="CreateContainer within sandbox \"0f5c73c285635aed4c6e6923cca55734388575794d45640feed0d6ccec614632\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:41:23.757661 containerd[1446]: time="2024-09-04T17:41:23.757599815Z" level=info msg="CreateContainer within sandbox \"0f5c73c285635aed4c6e6923cca55734388575794d45640feed0d6ccec614632\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"838a41c20cc26939f19ec3990cd0ab45d65b498821100eef88214862f8511435\"" Sep 4 17:41:23.758354 containerd[1446]: time="2024-09-04T17:41:23.758316355Z" level=info msg="StartContainer for \"838a41c20cc26939f19ec3990cd0ab45d65b498821100eef88214862f8511435\"" Sep 4 17:41:23.792579 systemd[1]: Started cri-containerd-838a41c20cc26939f19ec3990cd0ab45d65b498821100eef88214862f8511435.scope - libcontainer container 838a41c20cc26939f19ec3990cd0ab45d65b498821100eef88214862f8511435. Sep 4 17:41:23.949396 containerd[1446]: time="2024-09-04T17:41:23.949338655Z" level=info msg="StartContainer for \"838a41c20cc26939f19ec3990cd0ab45d65b498821100eef88214862f8511435\" returns successfully" Sep 4 17:41:24.451296 kubelet[2565]: E0904 17:41:24.451027 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:24.461665 kubelet[2565]: I0904 17:41:24.461615 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-59lvc" podStartSLOduration=3.461565321 podStartE2EDuration="3.461565321s" podCreationTimestamp="2024-09-04 17:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:24.461451937 +0000 UTC m=+17.171797267" watchObservedRunningTime="2024-09-04 17:41:24.461565321 +0000 UTC m=+17.171910651" Sep 4 17:41:24.502926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423591180.mount: Deactivated successfully. 
Sep 4 17:41:24.844387 containerd[1446]: time="2024-09-04T17:41:24.844216115Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:24.846647 containerd[1446]: time="2024-09-04T17:41:24.846599737Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136533" Sep 4 17:41:24.847819 containerd[1446]: time="2024-09-04T17:41:24.847777166Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:24.850836 containerd[1446]: time="2024-09-04T17:41:24.850776909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:24.851696 containerd[1446]: time="2024-09-04T17:41:24.851651948Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.886961507s" Sep 4 17:41:24.851770 containerd[1446]: time="2024-09-04T17:41:24.851699728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:41:24.853561 containerd[1446]: time="2024-09-04T17:41:24.853526631Z" level=info msg="CreateContainer within sandbox \"98675ea07fab0c61e8a2cd619a928db4a6e3c00bbf4fbb7aef2799c1d1c7ea25\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:41:24.871293 containerd[1446]: time="2024-09-04T17:41:24.871233960Z" level=info msg="CreateContainer within sandbox 
\"98675ea07fab0c61e8a2cd619a928db4a6e3c00bbf4fbb7aef2799c1d1c7ea25\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b9d34d595794929f92a721854d2db64a3ff8ecb340575575f08a1761d0077ad0\"" Sep 4 17:41:24.871834 containerd[1446]: time="2024-09-04T17:41:24.871793885Z" level=info msg="StartContainer for \"b9d34d595794929f92a721854d2db64a3ff8ecb340575575f08a1761d0077ad0\"" Sep 4 17:41:24.907606 systemd[1]: Started cri-containerd-b9d34d595794929f92a721854d2db64a3ff8ecb340575575f08a1761d0077ad0.scope - libcontainer container b9d34d595794929f92a721854d2db64a3ff8ecb340575575f08a1761d0077ad0. Sep 4 17:41:24.946564 containerd[1446]: time="2024-09-04T17:41:24.946447219Z" level=info msg="StartContainer for \"b9d34d595794929f92a721854d2db64a3ff8ecb340575575f08a1761d0077ad0\" returns successfully" Sep 4 17:41:27.712737 kubelet[2565]: I0904 17:41:27.712685 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-9pd9j" podStartSLOduration=3.8247687949999998 podStartE2EDuration="5.712628475s" podCreationTimestamp="2024-09-04 17:41:22 +0000 UTC" firstStartedPulling="2024-09-04 17:41:22.964261422 +0000 UTC m=+15.674606752" lastFinishedPulling="2024-09-04 17:41:24.852121102 +0000 UTC m=+17.562466432" observedRunningTime="2024-09-04 17:41:25.462497402 +0000 UTC m=+18.172842732" watchObservedRunningTime="2024-09-04 17:41:27.712628475 +0000 UTC m=+20.422973806" Sep 4 17:41:27.713432 kubelet[2565]: I0904 17:41:27.712953 2565 topology_manager.go:215] "Topology Admit Handler" podUID="071fe88c-2cad-4860-bad5-d35ab9417623" podNamespace="calico-system" podName="calico-typha-5b4bc9549b-9c5dd" Sep 4 17:41:27.725065 systemd[1]: Created slice kubepods-besteffort-pod071fe88c_2cad_4860_bad5_d35ab9417623.slice - libcontainer container kubepods-besteffort-pod071fe88c_2cad_4860_bad5_d35ab9417623.slice. 
Sep 4 17:41:27.789653 kubelet[2565]: I0904 17:41:27.789577 2565 topology_manager.go:215] "Topology Admit Handler" podUID="8e60b79c-a2aa-4f25-9943-1f47835f7ac5" podNamespace="calico-system" podName="calico-node-p9f9k" Sep 4 17:41:27.800260 systemd[1]: Created slice kubepods-besteffort-pod8e60b79c_a2aa_4f25_9943_1f47835f7ac5.slice - libcontainer container kubepods-besteffort-pod8e60b79c_a2aa_4f25_9943_1f47835f7ac5.slice. Sep 4 17:41:27.870003 kubelet[2565]: I0904 17:41:27.869950 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48smm\" (UniqueName: \"kubernetes.io/projected/071fe88c-2cad-4860-bad5-d35ab9417623-kube-api-access-48smm\") pod \"calico-typha-5b4bc9549b-9c5dd\" (UID: \"071fe88c-2cad-4860-bad5-d35ab9417623\") " pod="calico-system/calico-typha-5b4bc9549b-9c5dd" Sep 4 17:41:27.870003 kubelet[2565]: I0904 17:41:27.870002 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071fe88c-2cad-4860-bad5-d35ab9417623-tigera-ca-bundle\") pod \"calico-typha-5b4bc9549b-9c5dd\" (UID: \"071fe88c-2cad-4860-bad5-d35ab9417623\") " pod="calico-system/calico-typha-5b4bc9549b-9c5dd" Sep 4 17:41:27.870003 kubelet[2565]: I0904 17:41:27.870024 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/071fe88c-2cad-4860-bad5-d35ab9417623-typha-certs\") pod \"calico-typha-5b4bc9549b-9c5dd\" (UID: \"071fe88c-2cad-4860-bad5-d35ab9417623\") " pod="calico-system/calico-typha-5b4bc9549b-9c5dd" Sep 4 17:41:27.970507 kubelet[2565]: I0904 17:41:27.970336 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-lib-modules\") pod \"calico-node-p9f9k\" (UID: 
\"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.970507 kubelet[2565]: I0904 17:41:27.970380 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-policysync\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.970507 kubelet[2565]: I0904 17:41:27.970401 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-tigera-ca-bundle\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.970507 kubelet[2565]: I0904 17:41:27.970421 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-cni-log-dir\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.970507 kubelet[2565]: I0904 17:41:27.970492 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-cni-bin-dir\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971497 kubelet[2565]: I0904 17:41:27.970518 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-flexvol-driver-host\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " 
pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971497 kubelet[2565]: I0904 17:41:27.970539 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-xtables-lock\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971497 kubelet[2565]: I0904 17:41:27.970572 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-node-certs\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971497 kubelet[2565]: I0904 17:41:27.970591 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-var-lib-calico\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971497 kubelet[2565]: I0904 17:41:27.970644 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-cni-net-dir\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971804 kubelet[2565]: I0904 17:41:27.970706 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fkbz\" (UniqueName: \"kubernetes.io/projected/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-kube-api-access-8fkbz\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:27.971804 kubelet[2565]: 
I0904 17:41:27.970760 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8e60b79c-a2aa-4f25-9943-1f47835f7ac5-var-run-calico\") pod \"calico-node-p9f9k\" (UID: \"8e60b79c-a2aa-4f25-9943-1f47835f7ac5\") " pod="calico-system/calico-node-p9f9k" Sep 4 17:41:28.036104 kubelet[2565]: I0904 17:41:28.034380 2565 topology_manager.go:215] "Topology Admit Handler" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" podNamespace="calico-system" podName="csi-node-driver-fjlxj" Sep 4 17:41:28.036104 kubelet[2565]: E0904 17:41:28.034656 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:28.072756 kubelet[2565]: E0904 17:41:28.072508 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.072756 kubelet[2565]: W0904 17:41:28.072535 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.072756 kubelet[2565]: E0904 17:41:28.072554 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.075875 kubelet[2565]: E0904 17:41:28.075844 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.075875 kubelet[2565]: W0904 17:41:28.075866 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.076015 kubelet[2565]: E0904 17:41:28.075889 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.079487 kubelet[2565]: E0904 17:41:28.079202 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.079487 kubelet[2565]: W0904 17:41:28.079231 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.079487 kubelet[2565]: E0904 17:41:28.079261 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.080085 kubelet[2565]: E0904 17:41:28.079923 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.080085 kubelet[2565]: W0904 17:41:28.079939 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.080085 kubelet[2565]: E0904 17:41:28.079956 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.080506 kubelet[2565]: E0904 17:41:28.080374 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.080506 kubelet[2565]: W0904 17:41:28.080389 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.080506 kubelet[2565]: E0904 17:41:28.080405 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.080938 kubelet[2565]: E0904 17:41:28.080792 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.080938 kubelet[2565]: W0904 17:41:28.080806 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.080938 kubelet[2565]: E0904 17:41:28.080823 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.081405 kubelet[2565]: E0904 17:41:28.081255 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.081405 kubelet[2565]: W0904 17:41:28.081271 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.081405 kubelet[2565]: E0904 17:41:28.081286 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.081689 kubelet[2565]: E0904 17:41:28.081673 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.081689 kubelet[2565]: W0904 17:41:28.081686 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.081776 kubelet[2565]: E0904 17:41:28.081701 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.081948 kubelet[2565]: E0904 17:41:28.081935 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.081948 kubelet[2565]: W0904 17:41:28.081947 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.082006 kubelet[2565]: E0904 17:41:28.081962 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.082258 kubelet[2565]: E0904 17:41:28.082168 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.082258 kubelet[2565]: W0904 17:41:28.082183 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.082258 kubelet[2565]: E0904 17:41:28.082196 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.082501 kubelet[2565]: E0904 17:41:28.082443 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.082501 kubelet[2565]: W0904 17:41:28.082452 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.082501 kubelet[2565]: E0904 17:41:28.082466 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.082854 kubelet[2565]: E0904 17:41:28.082809 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.082854 kubelet[2565]: W0904 17:41:28.082826 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.082854 kubelet[2565]: E0904 17:41:28.082849 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083147 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.083961 kubelet[2565]: W0904 17:41:28.083160 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083201 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083497 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.083961 kubelet[2565]: W0904 17:41:28.083507 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083533 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083822 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.083961 kubelet[2565]: W0904 17:41:28.083836 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.083961 kubelet[2565]: E0904 17:41:28.083849 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.084250 kubelet[2565]: E0904 17:41:28.084191 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.084250 kubelet[2565]: W0904 17:41:28.084202 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.084250 kubelet[2565]: E0904 17:41:28.084216 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.084627 kubelet[2565]: E0904 17:41:28.084611 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.084627 kubelet[2565]: W0904 17:41:28.084624 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.084700 kubelet[2565]: E0904 17:41:28.084639 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.084922 kubelet[2565]: E0904 17:41:28.084887 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.084922 kubelet[2565]: W0904 17:41:28.084899 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.084922 kubelet[2565]: E0904 17:41:28.084913 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.085142 kubelet[2565]: E0904 17:41:28.085125 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.085142 kubelet[2565]: W0904 17:41:28.085140 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.085241 kubelet[2565]: E0904 17:41:28.085154 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.085447 kubelet[2565]: E0904 17:41:28.085421 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.085635 kubelet[2565]: W0904 17:41:28.085542 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.085635 kubelet[2565]: E0904 17:41:28.085561 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.086056 kubelet[2565]: E0904 17:41:28.085938 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.086056 kubelet[2565]: W0904 17:41:28.085950 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.086056 kubelet[2565]: E0904 17:41:28.085964 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.086654 kubelet[2565]: E0904 17:41:28.086513 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.086654 kubelet[2565]: W0904 17:41:28.086527 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.086654 kubelet[2565]: E0904 17:41:28.086541 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.172394 kubelet[2565]: E0904 17:41:28.172345 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.172394 kubelet[2565]: W0904 17:41:28.172367 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.172394 kubelet[2565]: E0904 17:41:28.172388 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.172638 kubelet[2565]: E0904 17:41:28.172612 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.172638 kubelet[2565]: W0904 17:41:28.172621 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.172638 kubelet[2565]: E0904 17:41:28.172631 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.172719 kubelet[2565]: I0904 17:41:28.172670 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5-kubelet-dir\") pod \"csi-node-driver-fjlxj\" (UID: \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\") " pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:28.172910 kubelet[2565]: E0904 17:41:28.172892 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.172910 kubelet[2565]: W0904 17:41:28.172907 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.172988 kubelet[2565]: E0904 17:41:28.172925 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.172988 kubelet[2565]: I0904 17:41:28.172948 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5-registration-dir\") pod \"csi-node-driver-fjlxj\" (UID: \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\") " pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:28.173179 kubelet[2565]: E0904 17:41:28.173155 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.173179 kubelet[2565]: W0904 17:41:28.173171 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.173248 kubelet[2565]: E0904 17:41:28.173189 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.173248 kubelet[2565]: I0904 17:41:28.173212 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld44p\" (UniqueName: \"kubernetes.io/projected/3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5-kube-api-access-ld44p\") pod \"csi-node-driver-fjlxj\" (UID: \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\") " pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:28.173441 kubelet[2565]: E0904 17:41:28.173416 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.173441 kubelet[2565]: W0904 17:41:28.173430 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.173505 kubelet[2565]: E0904 17:41:28.173448 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.173505 kubelet[2565]: I0904 17:41:28.173467 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5-varrun\") pod \"csi-node-driver-fjlxj\" (UID: \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\") " pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:28.173698 kubelet[2565]: E0904 17:41:28.173671 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.173698 kubelet[2565]: W0904 17:41:28.173686 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.173779 kubelet[2565]: E0904 17:41:28.173704 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.173779 kubelet[2565]: I0904 17:41:28.173725 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5-socket-dir\") pod \"csi-node-driver-fjlxj\" (UID: \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\") " pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:28.173956 kubelet[2565]: E0904 17:41:28.173931 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.173956 kubelet[2565]: W0904 17:41:28.173945 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.174013 kubelet[2565]: E0904 17:41:28.173980 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.174214 kubelet[2565]: E0904 17:41:28.174196 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.174214 kubelet[2565]: W0904 17:41:28.174209 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.174279 kubelet[2565]: E0904 17:41:28.174246 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.174472 kubelet[2565]: E0904 17:41:28.174454 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.174472 kubelet[2565]: W0904 17:41:28.174466 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.174537 kubelet[2565]: E0904 17:41:28.174502 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.174721 kubelet[2565]: E0904 17:41:28.174702 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.174721 kubelet[2565]: W0904 17:41:28.174714 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.174784 kubelet[2565]: E0904 17:41:28.174751 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.174944 kubelet[2565]: E0904 17:41:28.174927 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.174944 kubelet[2565]: W0904 17:41:28.174939 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.175011 kubelet[2565]: E0904 17:41:28.174972 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.175175 kubelet[2565]: E0904 17:41:28.175157 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.175175 kubelet[2565]: W0904 17:41:28.175170 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.175260 kubelet[2565]: E0904 17:41:28.175185 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.175439 kubelet[2565]: E0904 17:41:28.175421 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.175486 kubelet[2565]: W0904 17:41:28.175434 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.175486 kubelet[2565]: E0904 17:41:28.175452 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.175694 kubelet[2565]: E0904 17:41:28.175675 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.175694 kubelet[2565]: W0904 17:41:28.175688 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.175769 kubelet[2565]: E0904 17:41:28.175700 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.175931 kubelet[2565]: E0904 17:41:28.175913 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.175931 kubelet[2565]: W0904 17:41:28.175925 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.176082 kubelet[2565]: E0904 17:41:28.175938 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.176175 kubelet[2565]: E0904 17:41:28.176157 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.176175 kubelet[2565]: W0904 17:41:28.176170 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.176236 kubelet[2565]: E0904 17:41:28.176182 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.205529 kubelet[2565]: E0904 17:41:28.205494 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.205529 kubelet[2565]: W0904 17:41:28.205519 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.205726 kubelet[2565]: E0904 17:41:28.205550 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.274766 kubelet[2565]: E0904 17:41:28.274626 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.274766 kubelet[2565]: W0904 17:41:28.274646 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.274766 kubelet[2565]: E0904 17:41:28.274674 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.275055 kubelet[2565]: E0904 17:41:28.275016 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.275055 kubelet[2565]: W0904 17:41:28.275041 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.275333 kubelet[2565]: E0904 17:41:28.275075 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.275571 kubelet[2565]: E0904 17:41:28.275552 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.275571 kubelet[2565]: W0904 17:41:28.275567 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.275912 kubelet[2565]: E0904 17:41:28.275586 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.276027 kubelet[2565]: E0904 17:41:28.275997 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.276027 kubelet[2565]: W0904 17:41:28.276011 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.276114 kubelet[2565]: E0904 17:41:28.276041 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.276285 kubelet[2565]: E0904 17:41:28.276264 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.276285 kubelet[2565]: W0904 17:41:28.276279 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.276370 kubelet[2565]: E0904 17:41:28.276348 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.276607 kubelet[2565]: E0904 17:41:28.276586 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.276660 kubelet[2565]: W0904 17:41:28.276607 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.276660 kubelet[2565]: E0904 17:41:28.276624 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.276875 kubelet[2565]: E0904 17:41:28.276859 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.276875 kubelet[2565]: W0904 17:41:28.276873 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.276931 kubelet[2565]: E0904 17:41:28.276894 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.277149 kubelet[2565]: E0904 17:41:28.277130 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.277149 kubelet[2565]: W0904 17:41:28.277142 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.277275 kubelet[2565]: E0904 17:41:28.277258 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.277505 kubelet[2565]: E0904 17:41:28.277490 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.277505 kubelet[2565]: W0904 17:41:28.277503 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.277621 kubelet[2565]: E0904 17:41:28.277591 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.277936 kubelet[2565]: E0904 17:41:28.277801 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.277936 kubelet[2565]: W0904 17:41:28.277815 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.277936 kubelet[2565]: E0904 17:41:28.277858 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.278118 kubelet[2565]: E0904 17:41:28.278088 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.278118 kubelet[2565]: W0904 17:41:28.278100 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.278333 kubelet[2565]: E0904 17:41:28.278198 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.278401 kubelet[2565]: E0904 17:41:28.278377 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.278401 kubelet[2565]: W0904 17:41:28.278398 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.278487 kubelet[2565]: E0904 17:41:28.278423 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.278777 kubelet[2565]: E0904 17:41:28.278740 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.278777 kubelet[2565]: W0904 17:41:28.278758 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.278777 kubelet[2565]: E0904 17:41:28.278775 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.278997 kubelet[2565]: E0904 17:41:28.278978 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.278997 kubelet[2565]: W0904 17:41:28.278990 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.278997 kubelet[2565]: E0904 17:41:28.279005 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.281680 kubelet[2565]: E0904 17:41:28.281658 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.281842 kubelet[2565]: W0904 17:41:28.281739 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.282111 kubelet[2565]: E0904 17:41:28.282074 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.282191 kubelet[2565]: E0904 17:41:28.282174 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.282234 kubelet[2565]: W0904 17:41:28.282192 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.282443 kubelet[2565]: E0904 17:41:28.282410 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.282477 kubelet[2565]: W0904 17:41:28.282443 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.282664 kubelet[2565]: E0904 17:41:28.282651 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.282664 kubelet[2565]: W0904 17:41:28.282663 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.282733 kubelet[2565]: E0904 17:41:28.282687 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.282978 kubelet[2565]: E0904 17:41:28.282957 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.283023 kubelet[2565]: E0904 17:41:28.282994 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.283912 kubelet[2565]: E0904 17:41:28.283895 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.283944 kubelet[2565]: W0904 17:41:28.283912 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.283944 kubelet[2565]: E0904 17:41:28.283937 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.284223 kubelet[2565]: E0904 17:41:28.284205 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.284223 kubelet[2565]: W0904 17:41:28.284220 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.284344 kubelet[2565]: E0904 17:41:28.284244 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:28.284507 kubelet[2565]: E0904 17:41:28.284487 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.284507 kubelet[2565]: W0904 17:41:28.284507 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.284572 kubelet[2565]: E0904 17:41:28.284541 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:28.284772 kubelet[2565]: E0904 17:41:28.284758 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:28.284772 kubelet[2565]: W0904 17:41:28.284768 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:28.284850 kubelet[2565]: E0904 17:41:28.284801 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:41:28.285026 kubelet[2565]: E0904 17:41:28.285013 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:28.285026 kubelet[2565]: W0904 17:41:28.285022 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:28.285089 kubelet[2565]: E0904 17:41:28.285039 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:28.285268 kubelet[2565]: E0904 17:41:28.285255 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:28.285268 kubelet[2565]: W0904 17:41:28.285263 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:28.285425 kubelet[2565]: E0904 17:41:28.285278 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:28.285545 kubelet[2565]: E0904 17:41:28.285532 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:28.285545 kubelet[2565]: W0904 17:41:28.285542 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:28.285614 kubelet[2565]: E0904 17:41:28.285556 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:28.331109 kubelet[2565]: E0904 17:41:28.331051 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:28.331890 containerd[1446]: time="2024-09-04T17:41:28.331827180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4bc9549b-9c5dd,Uid:071fe88c-2cad-4860-bad5-d35ab9417623,Namespace:calico-system,Attempt:0,}"
Sep 4 17:41:28.341820 kubelet[2565]: E0904 17:41:28.341736 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:28.341820 kubelet[2565]: W0904 17:41:28.341757 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:28.341820 kubelet[2565]: E0904 17:41:28.341779 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:28.377590 containerd[1446]: time="2024-09-04T17:41:28.377476437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:41:28.377590 containerd[1446]: time="2024-09-04T17:41:28.377533684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:41:28.377590 containerd[1446]: time="2024-09-04T17:41:28.377549344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:28.379463 containerd[1446]: time="2024-09-04T17:41:28.377928999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:28.402632 systemd[1]: Started cri-containerd-c520805384284447d379396ccf706740270463c519f9aff5cf9d38aea237055e.scope - libcontainer container c520805384284447d379396ccf706740270463c519f9aff5cf9d38aea237055e.
Sep 4 17:41:28.407517 kubelet[2565]: E0904 17:41:28.407476 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:28.408251 containerd[1446]: time="2024-09-04T17:41:28.408193900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p9f9k,Uid:8e60b79c-a2aa-4f25-9943-1f47835f7ac5,Namespace:calico-system,Attempt:0,}"
Sep 4 17:41:28.448840 containerd[1446]: time="2024-09-04T17:41:28.448788368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4bc9549b-9c5dd,Uid:071fe88c-2cad-4860-bad5-d35ab9417623,Namespace:calico-system,Attempt:0,} returns sandbox id \"c520805384284447d379396ccf706740270463c519f9aff5cf9d38aea237055e\""
Sep 4 17:41:28.449888 kubelet[2565]: E0904 17:41:28.449860 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:28.451232 containerd[1446]: time="2024-09-04T17:41:28.451190451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep 4 17:41:28.691625 containerd[1446]: time="2024-09-04T17:41:28.691398260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:41:28.692296 containerd[1446]: time="2024-09-04T17:41:28.691704968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:41:28.692636 containerd[1446]: time="2024-09-04T17:41:28.692531905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:28.695401 containerd[1446]: time="2024-09-04T17:41:28.692745447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:41:28.718772 systemd[1]: Started cri-containerd-6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741.scope - libcontainer container 6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741.
Sep 4 17:41:28.754893 containerd[1446]: time="2024-09-04T17:41:28.754832782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p9f9k,Uid:8e60b79c-a2aa-4f25-9943-1f47835f7ac5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\""
Sep 4 17:41:28.755957 kubelet[2565]: E0904 17:41:28.755882 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:30.414342 kubelet[2565]: E0904 17:41:30.414257 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5"
Sep 4 17:41:31.944026 containerd[1446]: time="2024-09-04T17:41:31.943926829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:41:31.944919 containerd[1446]: time="2024-09-04T17:41:31.944843154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep 4 17:41:31.946264 containerd[1446]: time="2024-09-04T17:41:31.946233941Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:41:31.948641 containerd[1446]: time="2024-09-04T17:41:31.948589493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:41:31.949352 containerd[1446]: time="2024-09-04T17:41:31.949299570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.498063823s"
Sep 4 17:41:31.949394 containerd[1446]: time="2024-09-04T17:41:31.949358009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep 4 17:41:31.950356 containerd[1446]: time="2024-09-04T17:41:31.949912122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 17:41:31.960273 containerd[1446]: time="2024-09-04T17:41:31.960220930Z" level=info msg="CreateContainer within sandbox \"c520805384284447d379396ccf706740270463c519f9aff5cf9d38aea237055e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 17:41:31.980460 containerd[1446]: time="2024-09-04T17:41:31.980395523Z" level=info msg="CreateContainer within sandbox \"c520805384284447d379396ccf706740270463c519f9aff5cf9d38aea237055e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ac0ddaa4121d741c7e70f2c7b29091b1e90ed4a56a68cdfc7c9b3c7448f92fc\""
Sep 4 17:41:31.980960 containerd[1446]: time="2024-09-04T17:41:31.980841161Z" level=info msg="StartContainer for \"9ac0ddaa4121d741c7e70f2c7b29091b1e90ed4a56a68cdfc7c9b3c7448f92fc\""
Sep 4 17:41:32.016548 systemd[1]: Started cri-containerd-9ac0ddaa4121d741c7e70f2c7b29091b1e90ed4a56a68cdfc7c9b3c7448f92fc.scope - libcontainer container 9ac0ddaa4121d741c7e70f2c7b29091b1e90ed4a56a68cdfc7c9b3c7448f92fc.
Sep 4 17:41:32.063686 containerd[1446]: time="2024-09-04T17:41:32.063628893Z" level=info msg="StartContainer for \"9ac0ddaa4121d741c7e70f2c7b29091b1e90ed4a56a68cdfc7c9b3c7448f92fc\" returns successfully"
Sep 4 17:41:32.414104 kubelet[2565]: E0904 17:41:32.414052 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5"
Sep 4 17:41:32.468797 kubelet[2565]: E0904 17:41:32.468761 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:32.503500 kubelet[2565]: I0904 17:41:32.503450 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5b4bc9549b-9c5dd" podStartSLOduration=2.004615864 podStartE2EDuration="5.503401175s" podCreationTimestamp="2024-09-04 17:41:27 +0000 UTC" firstStartedPulling="2024-09-04 17:41:28.450905505 +0000 UTC m=+21.161250835" lastFinishedPulling="2024-09-04 17:41:31.949690816 +0000 UTC m=+24.660036146" observedRunningTime="2024-09-04 17:41:32.503011712 +0000 UTC m=+25.213357042" watchObservedRunningTime="2024-09-04 17:41:32.503401175 +0000 UTC m=+25.213746505"
Sep 4 17:41:32.521043 kubelet[2565]: E0904 17:41:32.520993 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.521043 kubelet[2565]: W0904 17:41:32.521023 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.521043 kubelet[2565]: E0904 17:41:32.521051 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.521381 kubelet[2565]: E0904 17:41:32.521342 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.521381 kubelet[2565]: W0904 17:41:32.521356 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.521381 kubelet[2565]: E0904 17:41:32.521368 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.521730 kubelet[2565]: E0904 17:41:32.521690 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.521730 kubelet[2565]: W0904 17:41:32.521720 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.521826 kubelet[2565]: E0904 17:41:32.521760 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.522129 kubelet[2565]: E0904 17:41:32.522106 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.522129 kubelet[2565]: W0904 17:41:32.522119 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.522224 kubelet[2565]: E0904 17:41:32.522134 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.522443 kubelet[2565]: E0904 17:41:32.522422 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.522443 kubelet[2565]: W0904 17:41:32.522435 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.522548 kubelet[2565]: E0904 17:41:32.522451 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.522736 kubelet[2565]: E0904 17:41:32.522712 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.522736 kubelet[2565]: W0904 17:41:32.522725 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.522818 kubelet[2565]: E0904 17:41:32.522740 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.523005 kubelet[2565]: E0904 17:41:32.522983 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.523005 kubelet[2565]: W0904 17:41:32.522997 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.523103 kubelet[2565]: E0904 17:41:32.523012 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.523334 kubelet[2565]: E0904 17:41:32.523295 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.523334 kubelet[2565]: W0904 17:41:32.523322 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.523334 kubelet[2565]: E0904 17:41:32.523337 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.523625 kubelet[2565]: E0904 17:41:32.523602 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.523625 kubelet[2565]: W0904 17:41:32.523615 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.523723 kubelet[2565]: E0904 17:41:32.523630 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.523910 kubelet[2565]: E0904 17:41:32.523887 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.523910 kubelet[2565]: W0904 17:41:32.523900 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.523992 kubelet[2565]: E0904 17:41:32.523915 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.524198 kubelet[2565]: E0904 17:41:32.524175 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.524198 kubelet[2565]: W0904 17:41:32.524187 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.524288 kubelet[2565]: E0904 17:41:32.524203 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.524470 kubelet[2565]: E0904 17:41:32.524448 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.524470 kubelet[2565]: W0904 17:41:32.524460 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.524566 kubelet[2565]: E0904 17:41:32.524477 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.524780 kubelet[2565]: E0904 17:41:32.524756 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.524780 kubelet[2565]: W0904 17:41:32.524768 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.524780 kubelet[2565]: E0904 17:41:32.524783 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.525027 kubelet[2565]: E0904 17:41:32.525005 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.525027 kubelet[2565]: W0904 17:41:32.525017 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.525027 kubelet[2565]: E0904 17:41:32.525031 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.525278 kubelet[2565]: E0904 17:41:32.525255 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.525278 kubelet[2565]: W0904 17:41:32.525267 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.525278 kubelet[2565]: E0904 17:41:32.525281 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.612116 kubelet[2565]: E0904 17:41:32.612066 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.612116 kubelet[2565]: W0904 17:41:32.612093 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.612116 kubelet[2565]: E0904 17:41:32.612120 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.612458 kubelet[2565]: E0904 17:41:32.612431 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.612458 kubelet[2565]: W0904 17:41:32.612447 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.612545 kubelet[2565]: E0904 17:41:32.612469 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.613172 kubelet[2565]: E0904 17:41:32.613140 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.613172 kubelet[2565]: W0904 17:41:32.613167 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.613241 kubelet[2565]: E0904 17:41:32.613191 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.615746 kubelet[2565]: E0904 17:41:32.615683 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.615746 kubelet[2565]: W0904 17:41:32.615717 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.615981 kubelet[2565]: E0904 17:41:32.615785 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.616425 kubelet[2565]: E0904 17:41:32.616396 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.616425 kubelet[2565]: W0904 17:41:32.616412 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.616520 kubelet[2565]: E0904 17:41:32.616458 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.616696 kubelet[2565]: E0904 17:41:32.616667 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.616696 kubelet[2565]: W0904 17:41:32.616686 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.616880 kubelet[2565]: E0904 17:41:32.616730 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.616965 kubelet[2565]: E0904 17:41:32.616943 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.616965 kubelet[2565]: W0904 17:41:32.616961 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.617038 kubelet[2565]: E0904 17:41:32.616990 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.617274 kubelet[2565]: E0904 17:41:32.617255 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.617274 kubelet[2565]: W0904 17:41:32.617269 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.617389 kubelet[2565]: E0904 17:41:32.617293 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.617540 kubelet[2565]: E0904 17:41:32.617516 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.617540 kubelet[2565]: W0904 17:41:32.617530 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.617609 kubelet[2565]: E0904 17:41:32.617549 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.617789 kubelet[2565]: E0904 17:41:32.617776 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.617789 kubelet[2565]: W0904 17:41:32.617787 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.617866 kubelet[2565]: E0904 17:41:32.617802 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.618045 kubelet[2565]: E0904 17:41:32.618031 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.618045 kubelet[2565]: W0904 17:41:32.618041 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.618119 kubelet[2565]: E0904 17:41:32.618076 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.618224 kubelet[2565]: E0904 17:41:32.618212 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.618224 kubelet[2565]: W0904 17:41:32.618221 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.618291 kubelet[2565]: E0904 17:41:32.618247 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.618408 kubelet[2565]: E0904 17:41:32.618396 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.618408 kubelet[2565]: W0904 17:41:32.618405 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.618475 kubelet[2565]: E0904 17:41:32.618421 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.618780 kubelet[2565]: E0904 17:41:32.618759 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.618780 kubelet[2565]: W0904 17:41:32.618775 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.618863 kubelet[2565]: E0904 17:41:32.618799 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.619090 kubelet[2565]: E0904 17:41:32.619072 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.619090 kubelet[2565]: W0904 17:41:32.619087 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.619163 kubelet[2565]: E0904 17:41:32.619108 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.619392 kubelet[2565]: E0904 17:41:32.619376 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.619392 kubelet[2565]: W0904 17:41:32.619388 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.619478 kubelet[2565]: E0904 17:41:32.619406 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.619629 kubelet[2565]: E0904 17:41:32.619615 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.619629 kubelet[2565]: W0904 17:41:32.619625 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.619688 kubelet[2565]: E0904 17:41:32.619635 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:32.619965 kubelet[2565]: E0904 17:41:32.619946 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:32.619965 kubelet[2565]: W0904 17:41:32.619961 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:32.620058 kubelet[2565]: E0904 17:41:32.619977 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.470133 kubelet[2565]: I0904 17:41:33.470100 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:41:33.470775 kubelet[2565]: E0904 17:41:33.470756 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:41:33.532385 kubelet[2565]: E0904 17:41:33.532336 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.532385 kubelet[2565]: W0904 17:41:33.532360 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.532385 kubelet[2565]: E0904 17:41:33.532382 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.532659 kubelet[2565]: E0904 17:41:33.532629 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.532659 kubelet[2565]: W0904 17:41:33.532643 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.532659 kubelet[2565]: E0904 17:41:33.532656 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.532875 kubelet[2565]: E0904 17:41:33.532839 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.532875 kubelet[2565]: W0904 17:41:33.532860 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.532875 kubelet[2565]: E0904 17:41:33.532870 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.533060 kubelet[2565]: E0904 17:41:33.533045 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.533060 kubelet[2565]: W0904 17:41:33.533055 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.533122 kubelet[2565]: E0904 17:41:33.533065 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.533262 kubelet[2565]: E0904 17:41:33.533240 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.533262 kubelet[2565]: W0904 17:41:33.533253 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.533262 kubelet[2565]: E0904 17:41:33.533262 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:41:33.533528 kubelet[2565]: E0904 17:41:33.533478 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:41:33.533528 kubelet[2565]: W0904 17:41:33.533498 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:41:33.533528 kubelet[2565]: E0904 17:41:33.533508 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.533732 kubelet[2565]: E0904 17:41:33.533703 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.533732 kubelet[2565]: W0904 17:41:33.533715 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.533732 kubelet[2565]: E0904 17:41:33.533724 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.533916 kubelet[2565]: E0904 17:41:33.533893 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.533916 kubelet[2565]: W0904 17:41:33.533903 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.533916 kubelet[2565]: E0904 17:41:33.533913 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.534111 kubelet[2565]: E0904 17:41:33.534092 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.534111 kubelet[2565]: W0904 17:41:33.534102 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.534111 kubelet[2565]: E0904 17:41:33.534112 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.534347 kubelet[2565]: E0904 17:41:33.534329 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.534347 kubelet[2565]: W0904 17:41:33.534344 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.534422 kubelet[2565]: E0904 17:41:33.534357 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.534568 kubelet[2565]: E0904 17:41:33.534549 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.534568 kubelet[2565]: W0904 17:41:33.534560 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.534624 kubelet[2565]: E0904 17:41:33.534571 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.534758 kubelet[2565]: E0904 17:41:33.534737 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.534758 kubelet[2565]: W0904 17:41:33.534747 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.534758 kubelet[2565]: E0904 17:41:33.534757 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.534945 kubelet[2565]: E0904 17:41:33.534928 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.534945 kubelet[2565]: W0904 17:41:33.534939 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.534997 kubelet[2565]: E0904 17:41:33.534949 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.535131 kubelet[2565]: E0904 17:41:33.535113 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.535131 kubelet[2565]: W0904 17:41:33.535124 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.535131 kubelet[2565]: E0904 17:41:33.535133 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.535324 kubelet[2565]: E0904 17:41:33.535297 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.535324 kubelet[2565]: W0904 17:41:33.535320 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.535366 kubelet[2565]: E0904 17:41:33.535330 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.621496 kubelet[2565]: E0904 17:41:33.621448 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.621496 kubelet[2565]: W0904 17:41:33.621476 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.621682 kubelet[2565]: E0904 17:41:33.621522 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.621972 kubelet[2565]: E0904 17:41:33.621936 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.621972 kubelet[2565]: W0904 17:41:33.621960 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.621972 kubelet[2565]: E0904 17:41:33.621986 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.622355 kubelet[2565]: E0904 17:41:33.622331 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.622355 kubelet[2565]: W0904 17:41:33.622347 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.622472 kubelet[2565]: E0904 17:41:33.622370 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.622707 kubelet[2565]: E0904 17:41:33.622680 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.622707 kubelet[2565]: W0904 17:41:33.622698 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.622813 kubelet[2565]: E0904 17:41:33.622721 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.623000 kubelet[2565]: E0904 17:41:33.622973 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.623000 kubelet[2565]: W0904 17:41:33.622992 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.623113 kubelet[2565]: E0904 17:41:33.623020 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.623344 kubelet[2565]: E0904 17:41:33.623323 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.623344 kubelet[2565]: W0904 17:41:33.623343 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.623465 kubelet[2565]: E0904 17:41:33.623383 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.623630 kubelet[2565]: E0904 17:41:33.623611 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.623630 kubelet[2565]: W0904 17:41:33.623628 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.623753 kubelet[2565]: E0904 17:41:33.623732 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.624013 kubelet[2565]: E0904 17:41:33.623993 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.624013 kubelet[2565]: W0904 17:41:33.624008 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.624096 kubelet[2565]: E0904 17:41:33.624032 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.624353 kubelet[2565]: E0904 17:41:33.624278 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.624353 kubelet[2565]: W0904 17:41:33.624293 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.624353 kubelet[2565]: E0904 17:41:33.624328 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.624582 kubelet[2565]: E0904 17:41:33.624563 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.624582 kubelet[2565]: W0904 17:41:33.624577 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.624684 kubelet[2565]: E0904 17:41:33.624591 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.624870 kubelet[2565]: E0904 17:41:33.624826 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.624870 kubelet[2565]: W0904 17:41:33.624839 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.624870 kubelet[2565]: E0904 17:41:33.624873 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.625108 kubelet[2565]: E0904 17:41:33.625071 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.625108 kubelet[2565]: W0904 17:41:33.625079 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.625108 kubelet[2565]: E0904 17:41:33.625099 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.625393 kubelet[2565]: E0904 17:41:33.625378 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.625393 kubelet[2565]: W0904 17:41:33.625389 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.625474 kubelet[2565]: E0904 17:41:33.625408 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.625683 kubelet[2565]: E0904 17:41:33.625667 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.625683 kubelet[2565]: W0904 17:41:33.625679 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.625768 kubelet[2565]: E0904 17:41:33.625719 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.625889 kubelet[2565]: E0904 17:41:33.625874 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.625889 kubelet[2565]: W0904 17:41:33.625885 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.625949 kubelet[2565]: E0904 17:41:33.625901 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.626107 kubelet[2565]: E0904 17:41:33.626094 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.626107 kubelet[2565]: W0904 17:41:33.626104 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.626188 kubelet[2565]: E0904 17:41:33.626124 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.626456 kubelet[2565]: E0904 17:41:33.626437 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.626456 kubelet[2565]: W0904 17:41:33.626450 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.626547 kubelet[2565]: E0904 17:41:33.626468 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:41:33.626687 kubelet[2565]: E0904 17:41:33.626672 2565 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:41:33.626687 kubelet[2565]: W0904 17:41:33.626684 2565 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:41:33.626771 kubelet[2565]: E0904 17:41:33.626697 2565 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:41:33.952056 containerd[1446]: time="2024-09-04T17:41:33.951983164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:33.955497 containerd[1446]: time="2024-09-04T17:41:33.955417344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:41:33.956706 containerd[1446]: time="2024-09-04T17:41:33.956670321Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:33.959190 containerd[1446]: time="2024-09-04T17:41:33.959128846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:33.959760 containerd[1446]: time="2024-09-04T17:41:33.959724195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 2.009778911s" Sep 4 17:41:33.959760 containerd[1446]: time="2024-09-04T17:41:33.959756217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:41:33.961454 containerd[1446]: time="2024-09-04T17:41:33.961417511Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:41:33.978745 containerd[1446]: time="2024-09-04T17:41:33.978680644Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661\"" Sep 4 17:41:33.979408 containerd[1446]: time="2024-09-04T17:41:33.979298096Z" level=info msg="StartContainer for \"f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661\"" Sep 4 17:41:34.013521 systemd[1]: Started cri-containerd-f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661.scope - libcontainer container f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661. Sep 4 17:41:34.052934 containerd[1446]: time="2024-09-04T17:41:34.052882933Z" level=info msg="StartContainer for \"f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661\" returns successfully" Sep 4 17:41:34.065330 systemd[1]: cri-containerd-f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661.scope: Deactivated successfully. 
Sep 4 17:41:34.087929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661-rootfs.mount: Deactivated successfully. Sep 4 17:41:34.368807 containerd[1446]: time="2024-09-04T17:41:34.366778327Z" level=info msg="shim disconnected" id=f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661 namespace=k8s.io Sep 4 17:41:34.368807 containerd[1446]: time="2024-09-04T17:41:34.368792555Z" level=warning msg="cleaning up after shim disconnected" id=f6e6305b5e9f2a5c8c085a4eb8ab9988bd0cf9fcc3dc1bac55be3cbee7eff661 namespace=k8s.io Sep 4 17:41:34.368807 containerd[1446]: time="2024-09-04T17:41:34.368806681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:34.414732 kubelet[2565]: E0904 17:41:34.414663 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:34.473758 kubelet[2565]: E0904 17:41:34.473728 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:34.474362 containerd[1446]: time="2024-09-04T17:41:34.474291039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:41:36.414789 kubelet[2565]: E0904 17:41:36.414698 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:38.093795 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:59630.service - OpenSSH per-connection 
server daemon (10.0.0.1:59630). Sep 4 17:41:38.167508 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 59630 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:38.169660 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:38.175377 systemd-logind[1427]: New session 8 of user core. Sep 4 17:41:38.180551 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:41:38.414625 kubelet[2565]: E0904 17:41:38.414571 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:38.519488 sshd[3321]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:38.523974 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:59630.service: Deactivated successfully. Sep 4 17:41:38.526497 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:41:38.527198 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:41:38.528076 systemd-logind[1427]: Removed session 8. 
Sep 4 17:41:40.420915 kubelet[2565]: E0904 17:41:40.420841 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:42.414911 kubelet[2565]: E0904 17:41:42.414832 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:42.904024 containerd[1446]: time="2024-09-04T17:41:42.903965038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:42.904745 containerd[1446]: time="2024-09-04T17:41:42.904675814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:41:42.905775 containerd[1446]: time="2024-09-04T17:41:42.905744262Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:42.908172 containerd[1446]: time="2024-09-04T17:41:42.908137289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:42.908833 containerd[1446]: time="2024-09-04T17:41:42.908803581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 8.43445816s" Sep 4 17:41:42.908871 containerd[1446]: time="2024-09-04T17:41:42.908832646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:41:42.913494 containerd[1446]: time="2024-09-04T17:41:42.913456214Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:41:42.930065 containerd[1446]: time="2024-09-04T17:41:42.929990435Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa\"" Sep 4 17:41:42.930642 containerd[1446]: time="2024-09-04T17:41:42.930601874Z" level=info msg="StartContainer for \"8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa\"" Sep 4 17:41:42.978812 systemd[1]: Started cri-containerd-8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa.scope - libcontainer container 8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa. Sep 4 17:41:43.125146 containerd[1446]: time="2024-09-04T17:41:43.125082934Z" level=info msg="StartContainer for \"8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa\" returns successfully" Sep 4 17:41:43.522647 kubelet[2565]: E0904 17:41:43.522602 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:43.532778 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:59642.service - OpenSSH per-connection server daemon (10.0.0.1:59642). 
Sep 4 17:41:43.679773 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 59642 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:43.682519 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:43.688628 systemd-logind[1427]: New session 9 of user core. Sep 4 17:41:43.697469 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:41:43.821019 sshd[3387]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:43.825692 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:59642.service: Deactivated successfully. Sep 4 17:41:43.827883 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:41:43.828597 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:41:43.829681 systemd-logind[1427]: Removed session 9. Sep 4 17:41:44.311800 systemd[1]: cri-containerd-8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa.scope: Deactivated successfully. Sep 4 17:41:44.333803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa-rootfs.mount: Deactivated successfully. Sep 4 17:41:44.391566 kubelet[2565]: I0904 17:41:44.391521 2565 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:41:44.419785 systemd[1]: Created slice kubepods-besteffort-pod3e3dbe29_0d6e_49c3_ba6a_a1f8d88c0ba5.slice - libcontainer container kubepods-besteffort-pod3e3dbe29_0d6e_49c3_ba6a_a1f8d88c0ba5.slice. 
Sep 4 17:41:44.434559 kubelet[2565]: I0904 17:41:44.434516 2565 topology_manager.go:215] "Topology Admit Handler" podUID="281f014c-f5de-43ce-935e-8fa5520141f6" podNamespace="calico-system" podName="calico-kube-controllers-77b8956656-5z2q5" Sep 4 17:41:44.435346 kubelet[2565]: I0904 17:41:44.435295 2565 topology_manager.go:215] "Topology Admit Handler" podUID="912e9b42-ab77-443f-87ef-4e4df49a2075" podNamespace="kube-system" podName="coredns-76f75df574-4kpcw" Sep 4 17:41:44.435453 kubelet[2565]: I0904 17:41:44.435430 2565 topology_manager.go:215] "Topology Admit Handler" podUID="69449cbf-db23-4a42-849e-f85741ba3407" podNamespace="kube-system" podName="coredns-76f75df574-6s9q5" Sep 4 17:41:44.441477 systemd[1]: Created slice kubepods-besteffort-pod281f014c_f5de_43ce_935e_8fa5520141f6.slice - libcontainer container kubepods-besteffort-pod281f014c_f5de_43ce_935e_8fa5520141f6.slice. Sep 4 17:41:44.445914 systemd[1]: Created slice kubepods-burstable-pod912e9b42_ab77_443f_87ef_4e4df49a2075.slice - libcontainer container kubepods-burstable-pod912e9b42_ab77_443f_87ef_4e4df49a2075.slice. Sep 4 17:41:44.450370 systemd[1]: Created slice kubepods-burstable-pod69449cbf_db23_4a42_849e_f85741ba3407.slice - libcontainer container kubepods-burstable-pod69449cbf_db23_4a42_849e_f85741ba3407.slice. 
Sep 4 17:41:44.480851 containerd[1446]: time="2024-09-04T17:41:44.480810103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjlxj,Uid:3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5,Namespace:calico-system,Attempt:0,}" Sep 4 17:41:44.485850 containerd[1446]: time="2024-09-04T17:41:44.485764361Z" level=info msg="shim disconnected" id=8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa namespace=k8s.io Sep 4 17:41:44.485850 containerd[1446]: time="2024-09-04T17:41:44.485848348Z" level=warning msg="cleaning up after shim disconnected" id=8a1f587f44e0d206d1791561621515dfbeb1e05830fb920385cf1602f4332faa namespace=k8s.io Sep 4 17:41:44.485990 containerd[1446]: time="2024-09-04T17:41:44.485861864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:44.526184 kubelet[2565]: E0904 17:41:44.526142 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:44.530929 kubelet[2565]: I0904 17:41:44.530552 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86hb\" (UniqueName: \"kubernetes.io/projected/281f014c-f5de-43ce-935e-8fa5520141f6-kube-api-access-m86hb\") pod \"calico-kube-controllers-77b8956656-5z2q5\" (UID: \"281f014c-f5de-43ce-935e-8fa5520141f6\") " pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" Sep 4 17:41:44.530929 kubelet[2565]: I0904 17:41:44.530609 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzw9\" (UniqueName: \"kubernetes.io/projected/69449cbf-db23-4a42-849e-f85741ba3407-kube-api-access-vzzw9\") pod \"coredns-76f75df574-6s9q5\" (UID: \"69449cbf-db23-4a42-849e-f85741ba3407\") " pod="kube-system/coredns-76f75df574-6s9q5" Sep 4 17:41:44.530929 kubelet[2565]: I0904 17:41:44.530644 2565 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912e9b42-ab77-443f-87ef-4e4df49a2075-config-volume\") pod \"coredns-76f75df574-4kpcw\" (UID: \"912e9b42-ab77-443f-87ef-4e4df49a2075\") " pod="kube-system/coredns-76f75df574-4kpcw" Sep 4 17:41:44.530929 kubelet[2565]: I0904 17:41:44.530667 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkn6t\" (UniqueName: \"kubernetes.io/projected/912e9b42-ab77-443f-87ef-4e4df49a2075-kube-api-access-mkn6t\") pod \"coredns-76f75df574-4kpcw\" (UID: \"912e9b42-ab77-443f-87ef-4e4df49a2075\") " pod="kube-system/coredns-76f75df574-4kpcw" Sep 4 17:41:44.530929 kubelet[2565]: I0904 17:41:44.530705 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69449cbf-db23-4a42-849e-f85741ba3407-config-volume\") pod \"coredns-76f75df574-6s9q5\" (UID: \"69449cbf-db23-4a42-849e-f85741ba3407\") " pod="kube-system/coredns-76f75df574-6s9q5" Sep 4 17:41:44.531854 kubelet[2565]: I0904 17:41:44.530726 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/281f014c-f5de-43ce-935e-8fa5520141f6-tigera-ca-bundle\") pod \"calico-kube-controllers-77b8956656-5z2q5\" (UID: \"281f014c-f5de-43ce-935e-8fa5520141f6\") " pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" Sep 4 17:41:44.533532 containerd[1446]: time="2024-09-04T17:41:44.532783115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:41:44.572285 containerd[1446]: time="2024-09-04T17:41:44.572132569Z" level=error msg="Failed to destroy network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.573054 containerd[1446]: time="2024-09-04T17:41:44.572647567Z" level=error msg="encountered an error cleaning up failed sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.573054 containerd[1446]: time="2024-09-04T17:41:44.572714682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjlxj,Uid:3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.573167 kubelet[2565]: E0904 17:41:44.573120 2565 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.573223 kubelet[2565]: E0904 17:41:44.573215 2565 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:44.573269 kubelet[2565]: E0904 17:41:44.573251 2565 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fjlxj" Sep 4 17:41:44.573375 kubelet[2565]: E0904 17:41:44.573346 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fjlxj_calico-system(3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fjlxj_calico-system(3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fjlxj" podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:44.574730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da-shm.mount: Deactivated successfully. 
Sep 4 17:41:44.744529 containerd[1446]: time="2024-09-04T17:41:44.744489231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8956656-5z2q5,Uid:281f014c-f5de-43ce-935e-8fa5520141f6,Namespace:calico-system,Attempt:0,}" Sep 4 17:41:44.748793 kubelet[2565]: E0904 17:41:44.748769 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:44.749651 containerd[1446]: time="2024-09-04T17:41:44.749621043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4kpcw,Uid:912e9b42-ab77-443f-87ef-4e4df49a2075,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:44.752591 kubelet[2565]: E0904 17:41:44.752559 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:44.753102 containerd[1446]: time="2024-09-04T17:41:44.753069512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6s9q5,Uid:69449cbf-db23-4a42-849e-f85741ba3407,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:44.823286 containerd[1446]: time="2024-09-04T17:41:44.822993152Z" level=error msg="Failed to destroy network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.824059 containerd[1446]: time="2024-09-04T17:41:44.824024380Z" level=error msg="encountered an error cleaning up failed sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 4 17:41:44.824171 containerd[1446]: time="2024-09-04T17:41:44.824089001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8956656-5z2q5,Uid:281f014c-f5de-43ce-935e-8fa5520141f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.824467 kubelet[2565]: E0904 17:41:44.824429 2565 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.824639 kubelet[2565]: E0904 17:41:44.824618 2565 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" Sep 4 17:41:44.824702 kubelet[2565]: E0904 17:41:44.824652 2565 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" Sep 4 17:41:44.824741 kubelet[2565]: E0904 17:41:44.824720 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77b8956656-5z2q5_calico-system(281f014c-f5de-43ce-935e-8fa5520141f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77b8956656-5z2q5_calico-system(281f014c-f5de-43ce-935e-8fa5520141f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" podUID="281f014c-f5de-43ce-935e-8fa5520141f6" Sep 4 17:41:44.836336 containerd[1446]: time="2024-09-04T17:41:44.836245572Z" level=error msg="Failed to destroy network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.836776 containerd[1446]: time="2024-09-04T17:41:44.836741093Z" level=error msg="encountered an error cleaning up failed sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.836849 containerd[1446]: time="2024-09-04T17:41:44.836805885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6s9q5,Uid:69449cbf-db23-4a42-849e-f85741ba3407,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.837345 kubelet[2565]: E0904 17:41:44.837101 2565 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.837345 kubelet[2565]: E0904 17:41:44.837174 2565 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6s9q5" Sep 4 17:41:44.837345 kubelet[2565]: E0904 17:41:44.837203 2565 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6s9q5" Sep 4 17:41:44.837491 kubelet[2565]: E0904 17:41:44.837289 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-6s9q5_kube-system(69449cbf-db23-4a42-849e-f85741ba3407)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-76f75df574-6s9q5_kube-system(69449cbf-db23-4a42-849e-f85741ba3407)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6s9q5" podUID="69449cbf-db23-4a42-849e-f85741ba3407" Sep 4 17:41:44.841572 containerd[1446]: time="2024-09-04T17:41:44.841508591Z" level=error msg="Failed to destroy network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.842078 containerd[1446]: time="2024-09-04T17:41:44.842038106Z" level=error msg="encountered an error cleaning up failed sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.842140 containerd[1446]: time="2024-09-04T17:41:44.842103939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4kpcw,Uid:912e9b42-ab77-443f-87ef-4e4df49a2075,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.842368 kubelet[2565]: E0904 17:41:44.842343 2565 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:44.842437 kubelet[2565]: E0904 17:41:44.842388 2565 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4kpcw" Sep 4 17:41:44.842437 kubelet[2565]: E0904 17:41:44.842407 2565 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4kpcw" Sep 4 17:41:44.842511 kubelet[2565]: E0904 17:41:44.842463 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4kpcw_kube-system(912e9b42-ab77-443f-87ef-4e4df49a2075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4kpcw_kube-system(912e9b42-ab77-443f-87ef-4e4df49a2075)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4kpcw" podUID="912e9b42-ab77-443f-87ef-4e4df49a2075" Sep 4 17:41:45.529283 kubelet[2565]: I0904 17:41:45.529242 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:45.530253 containerd[1446]: time="2024-09-04T17:41:45.530067628Z" level=info msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" Sep 4 17:41:45.530738 containerd[1446]: time="2024-09-04T17:41:45.530716266Z" level=info msg="Ensure that sandbox 84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8 in task-service has been cleanup successfully" Sep 4 17:41:45.531327 kubelet[2565]: I0904 17:41:45.531056 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:45.531608 containerd[1446]: time="2024-09-04T17:41:45.531566825Z" level=info msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" Sep 4 17:41:45.532504 containerd[1446]: time="2024-09-04T17:41:45.532420379Z" level=info msg="Ensure that sandbox b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14 in task-service has been cleanup successfully" Sep 4 17:41:45.538251 kubelet[2565]: I0904 17:41:45.538201 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:41:45.539878 containerd[1446]: time="2024-09-04T17:41:45.539829658Z" level=info msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" Sep 4 17:41:45.540111 containerd[1446]: time="2024-09-04T17:41:45.540074599Z" level=info msg="Ensure that sandbox ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592 in task-service has been 
cleanup successfully" Sep 4 17:41:45.542039 kubelet[2565]: I0904 17:41:45.541986 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:41:45.542796 containerd[1446]: time="2024-09-04T17:41:45.542755475Z" level=info msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" Sep 4 17:41:45.543916 containerd[1446]: time="2024-09-04T17:41:45.543590885Z" level=info msg="Ensure that sandbox dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da in task-service has been cleanup successfully" Sep 4 17:41:45.583349 containerd[1446]: time="2024-09-04T17:41:45.583260726Z" level=error msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" failed" error="failed to destroy network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:45.583573 containerd[1446]: time="2024-09-04T17:41:45.583497069Z" level=error msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" failed" error="failed to destroy network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:45.583956 kubelet[2565]: E0904 17:41:45.583738 2565 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:45.583956 kubelet[2565]: E0904 17:41:45.583855 2565 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14"} Sep 4 17:41:45.583956 kubelet[2565]: E0904 17:41:45.583904 2565 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"912e9b42-ab77-443f-87ef-4e4df49a2075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:41:45.583956 kubelet[2565]: E0904 17:41:45.583951 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"912e9b42-ab77-443f-87ef-4e4df49a2075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4kpcw" podUID="912e9b42-ab77-443f-87ef-4e4df49a2075" Sep 4 17:41:45.584194 kubelet[2565]: E0904 17:41:45.584039 2565 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:45.584194 kubelet[2565]: E0904 17:41:45.584063 2565 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8"} Sep 4 17:41:45.584194 kubelet[2565]: E0904 17:41:45.584155 2565 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69449cbf-db23-4a42-849e-f85741ba3407\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:41:45.584194 kubelet[2565]: E0904 17:41:45.584192 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69449cbf-db23-4a42-849e-f85741ba3407\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6s9q5" podUID="69449cbf-db23-4a42-849e-f85741ba3407" Sep 4 17:41:45.591986 containerd[1446]: time="2024-09-04T17:41:45.591899706Z" level=error msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" failed" error="failed to destroy network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 4 17:41:45.592239 kubelet[2565]: E0904 17:41:45.592185 2565 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:41:45.592239 kubelet[2565]: E0904 17:41:45.592241 2565 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da"} Sep 4 17:41:45.592445 kubelet[2565]: E0904 17:41:45.592283 2565 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:41:45.592445 kubelet[2565]: E0904 17:41:45.592341 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fjlxj" 
podUID="3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5" Sep 4 17:41:45.593588 containerd[1446]: time="2024-09-04T17:41:45.593537293Z" level=error msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" failed" error="failed to destroy network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:41:45.593935 kubelet[2565]: E0904 17:41:45.593893 2565 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:41:45.593985 kubelet[2565]: E0904 17:41:45.593971 2565 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592"} Sep 4 17:41:45.594090 kubelet[2565]: E0904 17:41:45.594065 2565 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"281f014c-f5de-43ce-935e-8fa5520141f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:41:45.594140 kubelet[2565]: E0904 17:41:45.594119 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"281f014c-f5de-43ce-935e-8fa5520141f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" podUID="281f014c-f5de-43ce-935e-8fa5520141f6" Sep 4 17:41:48.835083 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:59812.service - OpenSSH per-connection server daemon (10.0.0.1:59812). Sep 4 17:41:48.890846 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 59812 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:48.893149 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:48.898694 systemd-logind[1427]: New session 10 of user core. Sep 4 17:41:48.907459 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:41:48.928165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181461206.mount: Deactivated successfully. 
Sep 4 17:41:49.593558 containerd[1446]: time="2024-09-04T17:41:49.593478806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:49.594801 containerd[1446]: time="2024-09-04T17:41:49.594515303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:41:49.596202 containerd[1446]: time="2024-09-04T17:41:49.596163278Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:49.599095 containerd[1446]: time="2024-09-04T17:41:49.599065520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:49.600070 containerd[1446]: time="2024-09-04T17:41:49.600035231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.067204086s" Sep 4 17:41:49.600111 containerd[1446]: time="2024-09-04T17:41:49.600069265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:41:49.616598 containerd[1446]: time="2024-09-04T17:41:49.616545002Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:41:49.638092 sshd[3671]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:49.646948 
systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:59812.service: Deactivated successfully. Sep 4 17:41:49.648859 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:41:49.656673 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:41:49.658760 containerd[1446]: time="2024-09-04T17:41:49.658704601Z" level=info msg="CreateContainer within sandbox \"6004c59a842d7eabbcc8bc690e32633765515c52cf804bc79c68fd04ffb4a741\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7\"" Sep 4 17:41:49.659454 containerd[1446]: time="2024-09-04T17:41:49.659373047Z" level=info msg="StartContainer for \"9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7\"" Sep 4 17:41:49.663616 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:59816.service - OpenSSH per-connection server daemon (10.0.0.1:59816). Sep 4 17:41:49.666217 systemd-logind[1427]: Removed session 10. Sep 4 17:41:49.699884 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 59816 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:49.701734 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:49.706486 systemd-logind[1427]: New session 11 of user core. Sep 4 17:41:49.712487 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:41:49.733649 systemd[1]: Started cri-containerd-9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7.scope - libcontainer container 9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7. Sep 4 17:41:49.769754 containerd[1446]: time="2024-09-04T17:41:49.769677844Z" level=info msg="StartContainer for \"9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7\" returns successfully" Sep 4 17:41:49.843979 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Sep 4 17:41:49.844243 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:41:49.924070 sshd[3689]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:49.935585 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:59816.service: Deactivated successfully. Sep 4 17:41:49.938592 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:41:49.945482 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:41:49.952538 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:59826.service - OpenSSH per-connection server daemon (10.0.0.1:59826). Sep 4 17:41:49.955271 systemd-logind[1427]: Removed session 11. Sep 4 17:41:50.003959 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 59826 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:50.005362 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:50.012349 systemd-logind[1427]: New session 12 of user core. Sep 4 17:41:50.018552 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:41:50.158731 sshd[3755]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:50.163254 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:59826.service: Deactivated successfully. Sep 4 17:41:50.165582 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:41:50.166242 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:41:50.167207 systemd-logind[1427]: Removed session 12. 
Sep 4 17:41:50.555223 kubelet[2565]: E0904 17:41:50.555178 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:51.555973 kubelet[2565]: I0904 17:41:51.555925 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:41:51.556828 kubelet[2565]: E0904 17:41:51.556806 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:51.958941 kubelet[2565]: I0904 17:41:51.958874 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:41:51.959745 kubelet[2565]: E0904 17:41:51.959723 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:51.974182 kubelet[2565]: I0904 17:41:51.974035 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p9f9k" podStartSLOduration=4.130240233 podStartE2EDuration="24.973952014s" podCreationTimestamp="2024-09-04 17:41:27 +0000 UTC" firstStartedPulling="2024-09-04 17:41:28.756689287 +0000 UTC m=+21.467034617" lastFinishedPulling="2024-09-04 17:41:49.600401068 +0000 UTC m=+42.310746398" observedRunningTime="2024-09-04 17:41:50.57163349 +0000 UTC m=+43.281978821" watchObservedRunningTime="2024-09-04 17:41:51.973952014 +0000 UTC m=+44.684297354" Sep 4 17:41:52.472427 kernel: bpftool[3943]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:41:52.558485 kubelet[2565]: E0904 17:41:52.558452 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:52.726175 systemd-networkd[1375]: vxlan.calico: Link 
UP Sep 4 17:41:52.726187 systemd-networkd[1375]: vxlan.calico: Gained carrier Sep 4 17:41:54.257619 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Sep 4 17:41:55.172365 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:59832.service - OpenSSH per-connection server daemon (10.0.0.1:59832). Sep 4 17:41:55.346577 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 59832 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:41:55.348456 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:55.352579 systemd-logind[1427]: New session 13 of user core. Sep 4 17:41:55.366444 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:41:55.419382 kubelet[2565]: I0904 17:41:55.419343 2565 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:41:55.422315 kubelet[2565]: E0904 17:41:55.420233 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:55.506855 sshd[4023]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:55.511849 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:59832.service: Deactivated successfully. Sep 4 17:41:55.514350 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:41:55.514966 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:41:55.515864 systemd-logind[1427]: Removed session 13. Sep 4 17:41:55.569596 kubelet[2565]: E0904 17:41:55.569524 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:55.573906 systemd[1]: run-containerd-runc-k8s.io-9d37e33a0da83fb49967e0a8cfe673e2e8eb62270037c0926af22c718b985af7-runc.YUUCJx.mount: Deactivated successfully. 
Sep 4 17:41:56.416297 containerd[1446]: time="2024-09-04T17:41:56.416218946Z" level=info msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.562 [INFO][4099] k8s.go 608: Cleaning up netns ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.564 [INFO][4099] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" iface="eth0" netns="/var/run/netns/cni-9a1aac71-3ab7-24c2-fd47-8966243da6a2" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.564 [INFO][4099] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" iface="eth0" netns="/var/run/netns/cni-9a1aac71-3ab7-24c2-fd47-8966243da6a2" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.565 [INFO][4099] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" iface="eth0" netns="/var/run/netns/cni-9a1aac71-3ab7-24c2-fd47-8966243da6a2" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.565 [INFO][4099] k8s.go 615: Releasing IP address(es) ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.565 [INFO][4099] utils.go 188: Calico CNI releasing IP address ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.619 [INFO][4108] ipam_plugin.go 417: Releasing address using handleID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.620 [INFO][4108] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.620 [INFO][4108] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.628 [WARNING][4108] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.628 [INFO][4108] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.629 [INFO][4108] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:41:56.635725 containerd[1446]: 2024-09-04 17:41:56.632 [INFO][4099] k8s.go 621: Teardown processing complete. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:41:56.636261 containerd[1446]: time="2024-09-04T17:41:56.635991967Z" level=info msg="TearDown network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" successfully" Sep 4 17:41:56.636261 containerd[1446]: time="2024-09-04T17:41:56.636030409Z" level=info msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" returns successfully" Sep 4 17:41:56.636971 kubelet[2565]: E0904 17:41:56.636602 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:56.637397 containerd[1446]: time="2024-09-04T17:41:56.637107021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6s9q5,Uid:69449cbf-db23-4a42-849e-f85741ba3407,Namespace:kube-system,Attempt:1,}" Sep 4 17:41:56.639319 systemd[1]: run-netns-cni\x2d9a1aac71\x2d3ab7\x2d24c2\x2dfd47\x2d8966243da6a2.mount: Deactivated successfully. 
Sep 4 17:41:57.288873 systemd-networkd[1375]: cali1b3ce2a3f70: Link UP Sep 4 17:41:57.289153 systemd-networkd[1375]: cali1b3ce2a3f70: Gained carrier Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.186 [INFO][4116] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--6s9q5-eth0 coredns-76f75df574- kube-system 69449cbf-db23-4a42-849e-f85741ba3407 823 0 2024-09-04 17:41:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-6s9q5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b3ce2a3f70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.186 [INFO][4116] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.221 [INFO][4130] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" HandleID="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.260 [INFO][4130] ipam_plugin.go 270: Auto assigning IP ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" 
HandleID="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-6s9q5", "timestamp":"2024-09-04 17:41:57.221736503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.260 [INFO][4130] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.260 [INFO][4130] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.260 [INFO][4130] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.262 [INFO][4130] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.266 [INFO][4130] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.270 [INFO][4130] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.272 [INFO][4130] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.273 [INFO][4130] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.273 [INFO][4130] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.274 [INFO][4130] ipam.go 1685: Creating new handle: k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25 Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.277 [INFO][4130] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.283 [INFO][4130] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.283 [INFO][4130] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" host="localhost" Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.283 [INFO][4130] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:41:57.309789 containerd[1446]: 2024-09-04 17:41:57.283 [INFO][4130] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" HandleID="k8s-pod-network.04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.286 [INFO][4116] k8s.go 386: Populated endpoint ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6s9q5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69449cbf-db23-4a42-849e-f85741ba3407", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-6s9q5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b3ce2a3f70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.286 [INFO][4116] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.286 [INFO][4116] dataplane_linux.go 68: Setting the host side veth name to cali1b3ce2a3f70 ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.288 [INFO][4116] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.288 [INFO][4116] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6s9q5-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69449cbf-db23-4a42-849e-f85741ba3407", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25", Pod:"coredns-76f75df574-6s9q5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b3ce2a3f70", MAC:"8e:a9:f0:b2:47:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:41:57.310832 containerd[1446]: 2024-09-04 17:41:57.306 [INFO][4116] k8s.go 500: Wrote updated endpoint to datastore ContainerID="04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25" Namespace="kube-system" Pod="coredns-76f75df574-6s9q5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:41:57.468331 containerd[1446]: 
time="2024-09-04T17:41:57.468207673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:57.468331 containerd[1446]: time="2024-09-04T17:41:57.468258769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:57.468331 containerd[1446]: time="2024-09-04T17:41:57.468269649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:57.468826 containerd[1446]: time="2024-09-04T17:41:57.468363766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:57.492578 systemd[1]: Started cri-containerd-04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25.scope - libcontainer container 04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25. 
Sep 4 17:41:57.508068 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:41:57.542843 containerd[1446]: time="2024-09-04T17:41:57.542539045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6s9q5,Uid:69449cbf-db23-4a42-849e-f85741ba3407,Namespace:kube-system,Attempt:1,} returns sandbox id \"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25\"" Sep 4 17:41:57.543530 kubelet[2565]: E0904 17:41:57.543469 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:57.546078 containerd[1446]: time="2024-09-04T17:41:57.546038606Z" level=info msg="CreateContainer within sandbox \"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:41:57.868156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259640830.mount: Deactivated successfully. Sep 4 17:41:57.877402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983603514.mount: Deactivated successfully. Sep 4 17:41:58.080983 containerd[1446]: time="2024-09-04T17:41:58.080918385Z" level=info msg="CreateContainer within sandbox \"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c24ac56d28ee7c4e9c48630ae3f7b8b0e5a02c8fb042632dc01c5c57117aca6\"" Sep 4 17:41:58.081610 containerd[1446]: time="2024-09-04T17:41:58.081575019Z" level=info msg="StartContainer for \"0c24ac56d28ee7c4e9c48630ae3f7b8b0e5a02c8fb042632dc01c5c57117aca6\"" Sep 4 17:41:58.114504 systemd[1]: Started cri-containerd-0c24ac56d28ee7c4e9c48630ae3f7b8b0e5a02c8fb042632dc01c5c57117aca6.scope - libcontainer container 0c24ac56d28ee7c4e9c48630ae3f7b8b0e5a02c8fb042632dc01c5c57117aca6. 
Sep 4 17:41:58.367106 containerd[1446]: time="2024-09-04T17:41:58.367047103Z" level=info msg="StartContainer for \"0c24ac56d28ee7c4e9c48630ae3f7b8b0e5a02c8fb042632dc01c5c57117aca6\" returns successfully" Sep 4 17:41:58.589504 kubelet[2565]: E0904 17:41:58.589459 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:58.664419 kubelet[2565]: I0904 17:41:58.664250 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6s9q5" podStartSLOduration=36.664192847 podStartE2EDuration="36.664192847s" podCreationTimestamp="2024-09-04 17:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:58.601497227 +0000 UTC m=+51.311842557" watchObservedRunningTime="2024-09-04 17:41:58.664192847 +0000 UTC m=+51.374538178" Sep 4 17:41:58.993501 systemd-networkd[1375]: cali1b3ce2a3f70: Gained IPv6LL Sep 4 17:41:59.415718 containerd[1446]: time="2024-09-04T17:41:59.414843961Z" level=info msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.457 [INFO][4258] k8s.go 608: Cleaning up netns ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.457 [INFO][4258] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" iface="eth0" netns="/var/run/netns/cni-d42673be-0743-5120-f09d-cc8a3190d709" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.458 [INFO][4258] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" iface="eth0" netns="/var/run/netns/cni-d42673be-0743-5120-f09d-cc8a3190d709" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.459 [INFO][4258] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" iface="eth0" netns="/var/run/netns/cni-d42673be-0743-5120-f09d-cc8a3190d709" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.459 [INFO][4258] k8s.go 615: Releasing IP address(es) ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.459 [INFO][4258] utils.go 188: Calico CNI releasing IP address ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.485 [INFO][4266] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.485 [INFO][4266] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.485 [INFO][4266] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.491 [WARNING][4266] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.491 [INFO][4266] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.492 [INFO][4266] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:41:59.498577 containerd[1446]: 2024-09-04 17:41:59.495 [INFO][4258] k8s.go 621: Teardown processing complete. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:41:59.501723 containerd[1446]: time="2024-09-04T17:41:59.501455803Z" level=info msg="TearDown network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" successfully" Sep 4 17:41:59.501723 containerd[1446]: time="2024-09-04T17:41:59.501497782Z" level=info msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" returns successfully" Sep 4 17:41:59.501412 systemd[1]: run-netns-cni\x2dd42673be\x2d0743\x2d5120\x2df09d\x2dcc8a3190d709.mount: Deactivated successfully. 
Sep 4 17:41:59.502241 kubelet[2565]: E0904 17:41:59.502119 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:59.502834 containerd[1446]: time="2024-09-04T17:41:59.502672848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4kpcw,Uid:912e9b42-ab77-443f-87ef-4e4df49a2075,Namespace:kube-system,Attempt:1,}" Sep 4 17:41:59.591457 kubelet[2565]: E0904 17:41:59.591420 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:59.618543 systemd-networkd[1375]: cali973e76f36d8: Link UP Sep 4 17:41:59.619062 systemd-networkd[1375]: cali973e76f36d8: Gained carrier Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.555 [INFO][4275] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--4kpcw-eth0 coredns-76f75df574- kube-system 912e9b42-ab77-443f-87ef-4e4df49a2075 856 0 2024-09-04 17:41:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-4kpcw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali973e76f36d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.555 [INFO][4275] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.585 [INFO][4287] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" HandleID="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.592 [INFO][4287] ipam_plugin.go 270: Auto assigning IP ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" HandleID="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380400), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-4kpcw", "timestamp":"2024-09-04 17:41:59.585162631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.592 [INFO][4287] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.593 [INFO][4287] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.593 [INFO][4287] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.594 [INFO][4287] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.599 [INFO][4287] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.603 [INFO][4287] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.604 [INFO][4287] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.606 [INFO][4287] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.606 [INFO][4287] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.607 [INFO][4287] ipam.go 1685: Creating new handle: k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952 Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.610 [INFO][4287] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.613 [INFO][4287] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" host="localhost" Sep 4 
17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.613 [INFO][4287] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" host="localhost" Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.613 [INFO][4287] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:41:59.631804 containerd[1446]: 2024-09-04 17:41:59.613 [INFO][4287] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" HandleID="k8s-pod-network.6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.616 [INFO][4275] k8s.go 386: Populated endpoint ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4kpcw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"912e9b42-ab77-443f-87ef-4e4df49a2075", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-4kpcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali973e76f36d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.616 [INFO][4275] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.616 [INFO][4275] dataplane_linux.go 68: Setting the host side veth name to cali973e76f36d8 ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.618 [INFO][4275] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.619 [INFO][4275] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4kpcw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"912e9b42-ab77-443f-87ef-4e4df49a2075", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952", Pod:"coredns-76f75df574-4kpcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali973e76f36d8", MAC:"66:35:e4:28:71:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:41:59.632463 containerd[1446]: 2024-09-04 17:41:59.625 [INFO][4275] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952" Namespace="kube-system" Pod="coredns-76f75df574-4kpcw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:41:59.659369 containerd[1446]: time="2024-09-04T17:41:59.659238093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:59.659369 containerd[1446]: time="2024-09-04T17:41:59.659337260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:59.659369 containerd[1446]: time="2024-09-04T17:41:59.659352147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:59.659561 containerd[1446]: time="2024-09-04T17:41:59.659437428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:59.682445 systemd[1]: Started cri-containerd-6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952.scope - libcontainer container 6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952. 
Sep 4 17:41:59.696803 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:41:59.725736 containerd[1446]: time="2024-09-04T17:41:59.725692989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4kpcw,Uid:912e9b42-ab77-443f-87ef-4e4df49a2075,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952\"" Sep 4 17:41:59.727432 kubelet[2565]: E0904 17:41:59.726962 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:41:59.729488 containerd[1446]: time="2024-09-04T17:41:59.729110696Z" level=info msg="CreateContainer within sandbox \"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:41:59.743763 containerd[1446]: time="2024-09-04T17:41:59.743719064Z" level=info msg="CreateContainer within sandbox \"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff03da07458985d65e4f5fff678ee1f3763f91f096a80ab0ed0dd7ac80cd4259\"" Sep 4 17:41:59.744507 containerd[1446]: time="2024-09-04T17:41:59.744388712Z" level=info msg="StartContainer for \"ff03da07458985d65e4f5fff678ee1f3763f91f096a80ab0ed0dd7ac80cd4259\"" Sep 4 17:41:59.779480 systemd[1]: Started cri-containerd-ff03da07458985d65e4f5fff678ee1f3763f91f096a80ab0ed0dd7ac80cd4259.scope - libcontainer container ff03da07458985d65e4f5fff678ee1f3763f91f096a80ab0ed0dd7ac80cd4259. 
Sep 4 17:41:59.809537 containerd[1446]: time="2024-09-04T17:41:59.809495516Z" level=info msg="StartContainer for \"ff03da07458985d65e4f5fff678ee1f3763f91f096a80ab0ed0dd7ac80cd4259\" returns successfully" Sep 4 17:42:00.415607 containerd[1446]: time="2024-09-04T17:42:00.415539047Z" level=info msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" Sep 4 17:42:00.415802 containerd[1446]: time="2024-09-04T17:42:00.415778416Z" level=info msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.460 [INFO][4419] k8s.go 608: Cleaning up netns ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.461 [INFO][4419] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" iface="eth0" netns="/var/run/netns/cni-85aafc1b-4146-549a-956b-bf592d625a56" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.461 [INFO][4419] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" iface="eth0" netns="/var/run/netns/cni-85aafc1b-4146-549a-956b-bf592d625a56" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.461 [INFO][4419] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" iface="eth0" netns="/var/run/netns/cni-85aafc1b-4146-549a-956b-bf592d625a56" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.461 [INFO][4419] k8s.go 615: Releasing IP address(es) ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.462 [INFO][4419] utils.go 188: Calico CNI releasing IP address ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.490 [INFO][4434] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.491 [INFO][4434] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.491 [INFO][4434] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.496 [WARNING][4434] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.496 [INFO][4434] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.498 [INFO][4434] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:00.504050 containerd[1446]: 2024-09-04 17:42:00.501 [INFO][4419] k8s.go 621: Teardown processing complete. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:00.507323 containerd[1446]: time="2024-09-04T17:42:00.505524248Z" level=info msg="TearDown network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" successfully" Sep 4 17:42:00.507323 containerd[1446]: time="2024-09-04T17:42:00.505562900Z" level=info msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" returns successfully" Sep 4 17:42:00.509339 containerd[1446]: time="2024-09-04T17:42:00.508094744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjlxj,Uid:3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5,Namespace:calico-system,Attempt:1,}" Sep 4 17:42:00.509346 systemd[1]: run-netns-cni\x2d85aafc1b\x2d4146\x2d549a\x2d956b\x2dbf592d625a56.mount: Deactivated successfully. 
Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.463 [INFO][4418] k8s.go 608: Cleaning up netns ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.463 [INFO][4418] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" iface="eth0" netns="/var/run/netns/cni-f1ebf804-1b3c-2d00-e7cc-7e37d26930f9" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.463 [INFO][4418] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" iface="eth0" netns="/var/run/netns/cni-f1ebf804-1b3c-2d00-e7cc-7e37d26930f9" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.463 [INFO][4418] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" iface="eth0" netns="/var/run/netns/cni-f1ebf804-1b3c-2d00-e7cc-7e37d26930f9" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.464 [INFO][4418] k8s.go 615: Releasing IP address(es) ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.464 [INFO][4418] utils.go 188: Calico CNI releasing IP address ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.491 [INFO][4435] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.491 [INFO][4435] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.498 [INFO][4435] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.504 [WARNING][4435] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.504 [INFO][4435] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.505 [INFO][4435] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:00.511876 containerd[1446]: 2024-09-04 17:42:00.508 [INFO][4418] k8s.go 621: Teardown processing complete. 
ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:00.512523 containerd[1446]: time="2024-09-04T17:42:00.512054178Z" level=info msg="TearDown network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" successfully" Sep 4 17:42:00.512523 containerd[1446]: time="2024-09-04T17:42:00.512079596Z" level=info msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" returns successfully" Sep 4 17:42:00.512902 containerd[1446]: time="2024-09-04T17:42:00.512848770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8956656-5z2q5,Uid:281f014c-f5de-43ce-935e-8fa5520141f6,Namespace:calico-system,Attempt:1,}" Sep 4 17:42:00.521759 systemd[1]: run-netns-cni\x2df1ebf804\x2d1b3c\x2d2d00\x2de7cc\x2d7e37d26930f9.mount: Deactivated successfully. Sep 4 17:42:00.529657 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:44102.service - OpenSSH per-connection server daemon (10.0.0.1:44102). Sep 4 17:42:00.576994 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 44102 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:00.579050 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:00.589767 systemd-logind[1427]: New session 14 of user core. Sep 4 17:42:00.594586 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:42:00.617076 kubelet[2565]: E0904 17:42:00.617026 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:42:00.619695 kubelet[2565]: E0904 17:42:00.619417 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:42:00.627214 kubelet[2565]: I0904 17:42:00.627137 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4kpcw" podStartSLOduration=38.627105101 podStartE2EDuration="38.627105101s" podCreationTimestamp="2024-09-04 17:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:00.62676384 +0000 UTC m=+53.337109170" watchObservedRunningTime="2024-09-04 17:42:00.627105101 +0000 UTC m=+53.337450431" Sep 4 17:42:00.677271 systemd-networkd[1375]: cali988d9741bce: Link UP Sep 4 17:42:00.678628 systemd-networkd[1375]: cali988d9741bce: Gained carrier Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.571 [INFO][4451] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fjlxj-eth0 csi-node-driver- calico-system 3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5 869 0 2024-09-04 17:41:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-fjlxj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali988d9741bce [] []}} ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" 
Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.572 [INFO][4451] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.620 [INFO][4482] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" HandleID="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.634 [INFO][4482] ipam_plugin.go 270: Auto assigning IP ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" HandleID="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011e150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fjlxj", "timestamp":"2024-09-04 17:42:00.620734269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.634 [INFO][4482] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.635 [INFO][4482] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.635 [INFO][4482] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.638 [INFO][4482] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.648 [INFO][4482] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.654 [INFO][4482] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.656 [INFO][4482] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.658 [INFO][4482] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.658 [INFO][4482] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.659 [INFO][4482] ipam.go 1685: Creating new handle: k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.662 [INFO][4482] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.666 [INFO][4482] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" host="localhost" Sep 4 
17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.666 [INFO][4482] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" host="localhost" Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.666 [INFO][4482] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:00.699829 containerd[1446]: 2024-09-04 17:42:00.666 [INFO][4482] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" HandleID="k8s-pod-network.90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.671 [INFO][4451] k8s.go 386: Populated endpoint ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fjlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali988d9741bce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.671 [INFO][4451] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.671 [INFO][4451] dataplane_linux.go 68: Setting the host side veth name to cali988d9741bce ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.679 [INFO][4451] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.680 [INFO][4451] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da", Pod:"csi-node-driver-fjlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali988d9741bce", MAC:"a2:72:7d:8f:ea:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:00.700686 containerd[1446]: 2024-09-04 17:42:00.691 [INFO][4451] k8s.go 500: Wrote updated endpoint to datastore ContainerID="90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da" Namespace="calico-system" Pod="csi-node-driver-fjlxj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:00.723504 systemd-networkd[1375]: cali523aca75cd6: Link UP Sep 4 17:42:00.726576 systemd-networkd[1375]: cali523aca75cd6: Gained carrier Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.577 [INFO][4458] plugin.go 326: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0 calico-kube-controllers-77b8956656- calico-system 281f014c-f5de-43ce-935e-8fa5520141f6 870 0 2024-09-04 17:41:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77b8956656 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77b8956656-5z2q5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali523aca75cd6 [] []}} ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.577 [INFO][4458] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.630 [INFO][4487] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" HandleID="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.642 [INFO][4487] ipam_plugin.go 270: Auto assigning IP ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" HandleID="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" 
Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037ce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77b8956656-5z2q5", "timestamp":"2024-09-04 17:42:00.630958396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.642 [INFO][4487] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.667 [INFO][4487] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.667 [INFO][4487] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.670 [INFO][4487] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.673 [INFO][4487] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.683 [INFO][4487] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.688 [INFO][4487] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.694 [INFO][4487] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.695 [INFO][4487] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.704 [INFO][4487] ipam.go 1685: Creating new handle: k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.709 [INFO][4487] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.715 [INFO][4487] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.715 [INFO][4487] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" host="localhost" Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.715 [INFO][4487] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:42:00.745944 containerd[1446]: 2024-09-04 17:42:00.715 [INFO][4487] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" HandleID="k8s-pod-network.602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.719 [INFO][4458] k8s.go 386: Populated endpoint ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0", GenerateName:"calico-kube-controllers-77b8956656-", Namespace:"calico-system", SelfLink:"", UID:"281f014c-f5de-43ce-935e-8fa5520141f6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8956656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77b8956656-5z2q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali523aca75cd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.719 [INFO][4458] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.719 [INFO][4458] dataplane_linux.go 68: Setting the host side veth name to cali523aca75cd6 ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.724 [INFO][4458] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.725 [INFO][4458] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0", 
GenerateName:"calico-kube-controllers-77b8956656-", Namespace:"calico-system", SelfLink:"", UID:"281f014c-f5de-43ce-935e-8fa5520141f6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8956656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad", Pod:"calico-kube-controllers-77b8956656-5z2q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali523aca75cd6", MAC:"4e:21:64:93:fb:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:00.747031 containerd[1446]: 2024-09-04 17:42:00.739 [INFO][4458] k8s.go 500: Wrote updated endpoint to datastore ContainerID="602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad" Namespace="calico-system" Pod="calico-kube-controllers-77b8956656-5z2q5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:00.755544 containerd[1446]: time="2024-09-04T17:42:00.754686250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:00.755544 containerd[1446]: time="2024-09-04T17:42:00.754767903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:00.755544 containerd[1446]: time="2024-09-04T17:42:00.754782491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:00.756443 containerd[1446]: time="2024-09-04T17:42:00.755253495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:00.773608 sshd[4450]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:00.782157 systemd[1]: Started cri-containerd-90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da.scope - libcontainer container 90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da. Sep 4 17:42:00.783442 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:44102.service: Deactivated successfully. Sep 4 17:42:00.792154 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:42:00.792348 containerd[1446]: time="2024-09-04T17:42:00.792065000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:00.793904 containerd[1446]: time="2024-09-04T17:42:00.792740107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:00.794468 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:42:00.795942 containerd[1446]: time="2024-09-04T17:42:00.794579391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:00.795942 containerd[1446]: time="2024-09-04T17:42:00.794873162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:00.818006 systemd-logind[1427]: Removed session 14. Sep 4 17:42:00.821702 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:42:00.825521 systemd[1]: Started cri-containerd-602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad.scope - libcontainer container 602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad. Sep 4 17:42:00.842363 containerd[1446]: time="2024-09-04T17:42:00.841952416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjlxj,Uid:3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5,Namespace:calico-system,Attempt:1,} returns sandbox id \"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da\"" Sep 4 17:42:00.843623 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:42:00.846377 containerd[1446]: time="2024-09-04T17:42:00.846053345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:42:00.872233 containerd[1446]: time="2024-09-04T17:42:00.872190248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8956656-5z2q5,Uid:281f014c-f5de-43ce-935e-8fa5520141f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad\"" Sep 4 17:42:01.233456 systemd-networkd[1375]: cali973e76f36d8: Gained IPv6LL Sep 4 17:42:01.620628 kubelet[2565]: E0904 17:42:01.620580 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:42:02.065505 
systemd-networkd[1375]: cali523aca75cd6: Gained IPv6LL Sep 4 17:42:02.250048 containerd[1446]: time="2024-09-04T17:42:02.249994906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:02.250705 containerd[1446]: time="2024-09-04T17:42:02.250659313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:42:02.251966 containerd[1446]: time="2024-09-04T17:42:02.251938789Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:02.255793 containerd[1446]: time="2024-09-04T17:42:02.254935952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:02.255793 containerd[1446]: time="2024-09-04T17:42:02.255715611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.409617361s" Sep 4 17:42:02.255793 containerd[1446]: time="2024-09-04T17:42:02.255741070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:42:02.256680 containerd[1446]: time="2024-09-04T17:42:02.256657575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:42:02.258446 containerd[1446]: time="2024-09-04T17:42:02.258414186Z" level=info msg="CreateContainer within sandbox 
\"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:42:02.304005 containerd[1446]: time="2024-09-04T17:42:02.303942553Z" level=info msg="CreateContainer within sandbox \"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"03c86217d98eacbcf4d397a6cf7420a0adde0638f1a06d7d5daf9ab237d36473\"" Sep 4 17:42:02.304778 containerd[1446]: time="2024-09-04T17:42:02.304503790Z" level=info msg="StartContainer for \"03c86217d98eacbcf4d397a6cf7420a0adde0638f1a06d7d5daf9ab237d36473\"" Sep 4 17:42:02.338589 systemd[1]: Started cri-containerd-03c86217d98eacbcf4d397a6cf7420a0adde0638f1a06d7d5daf9ab237d36473.scope - libcontainer container 03c86217d98eacbcf4d397a6cf7420a0adde0638f1a06d7d5daf9ab237d36473. Sep 4 17:42:02.371770 containerd[1446]: time="2024-09-04T17:42:02.371712257Z" level=info msg="StartContainer for \"03c86217d98eacbcf4d397a6cf7420a0adde0638f1a06d7d5daf9ab237d36473\" returns successfully" Sep 4 17:42:02.705456 systemd-networkd[1375]: cali988d9741bce: Gained IPv6LL Sep 4 17:42:04.049175 containerd[1446]: time="2024-09-04T17:42:04.049109983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:04.049865 containerd[1446]: time="2024-09-04T17:42:04.049832960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:42:04.050968 containerd[1446]: time="2024-09-04T17:42:04.050941272Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:04.053102 containerd[1446]: time="2024-09-04T17:42:04.053059616Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:04.053581 containerd[1446]: time="2024-09-04T17:42:04.053555204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 1.796864646s" Sep 4 17:42:04.053634 containerd[1446]: time="2024-09-04T17:42:04.053586855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:42:04.054382 containerd[1446]: time="2024-09-04T17:42:04.054172627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:42:04.062128 containerd[1446]: time="2024-09-04T17:42:04.062074249Z" level=info msg="CreateContainer within sandbox \"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:42:04.078535 containerd[1446]: time="2024-09-04T17:42:04.078473652Z" level=info msg="CreateContainer within sandbox \"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cb5a3ed6274d5d79e178e0d71bf420ccf46f574f1a9f34fc0c343e3c25bf98d3\"" Sep 4 17:42:04.078964 containerd[1446]: time="2024-09-04T17:42:04.078925727Z" level=info msg="StartContainer for \"cb5a3ed6274d5d79e178e0d71bf420ccf46f574f1a9f34fc0c343e3c25bf98d3\"" Sep 4 17:42:04.106460 systemd[1]: Started 
cri-containerd-cb5a3ed6274d5d79e178e0d71bf420ccf46f574f1a9f34fc0c343e3c25bf98d3.scope - libcontainer container cb5a3ed6274d5d79e178e0d71bf420ccf46f574f1a9f34fc0c343e3c25bf98d3. Sep 4 17:42:04.148163 containerd[1446]: time="2024-09-04T17:42:04.148124313Z" level=info msg="StartContainer for \"cb5a3ed6274d5d79e178e0d71bf420ccf46f574f1a9f34fc0c343e3c25bf98d3\" returns successfully" Sep 4 17:42:04.639811 kubelet[2565]: I0904 17:42:04.639759 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77b8956656-5z2q5" podStartSLOduration=33.458898992 podStartE2EDuration="36.639197926s" podCreationTimestamp="2024-09-04 17:41:28 +0000 UTC" firstStartedPulling="2024-09-04 17:42:00.873619492 +0000 UTC m=+53.583964822" lastFinishedPulling="2024-09-04 17:42:04.053918436 +0000 UTC m=+56.764263756" observedRunningTime="2024-09-04 17:42:04.638141694 +0000 UTC m=+57.348487024" watchObservedRunningTime="2024-09-04 17:42:04.639197926 +0000 UTC m=+57.349543257" Sep 4 17:42:05.402344 containerd[1446]: time="2024-09-04T17:42:05.402272282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:05.403276 containerd[1446]: time="2024-09-04T17:42:05.403223520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:42:05.404528 containerd[1446]: time="2024-09-04T17:42:05.404463914Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:05.406434 containerd[1446]: time="2024-09-04T17:42:05.406393340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 4 17:42:05.407067 containerd[1446]: time="2024-09-04T17:42:05.407031393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.352828838s" Sep 4 17:42:05.407067 containerd[1446]: time="2024-09-04T17:42:05.407060568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:42:05.408753 containerd[1446]: time="2024-09-04T17:42:05.408724783Z" level=info msg="CreateContainer within sandbox \"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:42:05.424205 containerd[1446]: time="2024-09-04T17:42:05.424163677Z" level=info msg="CreateContainer within sandbox \"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"36c1f738c007fa3494db5d434c94d5f65498187529d3dd97c4c9c7a0047d19f8\"" Sep 4 17:42:05.424604 containerd[1446]: time="2024-09-04T17:42:05.424583207Z" level=info msg="StartContainer for \"36c1f738c007fa3494db5d434c94d5f65498187529d3dd97c4c9c7a0047d19f8\"" Sep 4 17:42:05.454443 systemd[1]: Started cri-containerd-36c1f738c007fa3494db5d434c94d5f65498187529d3dd97c4c9c7a0047d19f8.scope - libcontainer container 36c1f738c007fa3494db5d434c94d5f65498187529d3dd97c4c9c7a0047d19f8. 
Sep 4 17:42:05.484679 containerd[1446]: time="2024-09-04T17:42:05.484636376Z" level=info msg="StartContainer for \"36c1f738c007fa3494db5d434c94d5f65498187529d3dd97c4c9c7a0047d19f8\" returns successfully" Sep 4 17:42:05.543623 kubelet[2565]: I0904 17:42:05.543586 2565 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:42:05.544782 kubelet[2565]: I0904 17:42:05.544755 2565 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:42:05.642566 kubelet[2565]: I0904 17:42:05.641755 2565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fjlxj" podStartSLOduration=34.07899598 podStartE2EDuration="38.641699462s" podCreationTimestamp="2024-09-04 17:41:27 +0000 UTC" firstStartedPulling="2024-09-04 17:42:00.844597402 +0000 UTC m=+53.554942732" lastFinishedPulling="2024-09-04 17:42:05.407300883 +0000 UTC m=+58.117646214" observedRunningTime="2024-09-04 17:42:05.641150612 +0000 UTC m=+58.351495952" watchObservedRunningTime="2024-09-04 17:42:05.641699462 +0000 UTC m=+58.352044802" Sep 4 17:42:05.794665 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Sep 4 17:42:05.838843 sshd[4783]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:05.840628 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:05.845022 systemd-logind[1427]: New session 15 of user core. Sep 4 17:42:05.850427 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 17:42:05.975053 sshd[4783]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:05.979335 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:44114.service: Deactivated successfully. Sep 4 17:42:05.981510 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:42:05.982215 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:42:05.983147 systemd-logind[1427]: Removed session 15. Sep 4 17:42:07.406998 containerd[1446]: time="2024-09-04T17:42:07.406513974Z" level=info msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.453 [WARNING][4813] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0", GenerateName:"calico-kube-controllers-77b8956656-", Namespace:"calico-system", SelfLink:"", UID:"281f014c-f5de-43ce-935e-8fa5520141f6", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8956656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad", Pod:"calico-kube-controllers-77b8956656-5z2q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali523aca75cd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.453 [INFO][4813] k8s.go 608: Cleaning up netns ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.453 [INFO][4813] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" iface="eth0" netns="" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.454 [INFO][4813] k8s.go 615: Releasing IP address(es) ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.454 [INFO][4813] utils.go 188: Calico CNI releasing IP address ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.480 [INFO][4823] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.480 [INFO][4823] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.480 [INFO][4823] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.485 [WARNING][4823] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.485 [INFO][4823] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.486 [INFO][4823] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:07.491942 containerd[1446]: 2024-09-04 17:42:07.488 [INFO][4813] k8s.go 621: Teardown processing complete. 
ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.492853 containerd[1446]: time="2024-09-04T17:42:07.491997792Z" level=info msg="TearDown network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" successfully" Sep 4 17:42:07.492853 containerd[1446]: time="2024-09-04T17:42:07.492023972Z" level=info msg="StopPodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" returns successfully" Sep 4 17:42:07.498316 containerd[1446]: time="2024-09-04T17:42:07.498275486Z" level=info msg="RemovePodSandbox for \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" Sep 4 17:42:07.501455 containerd[1446]: time="2024-09-04T17:42:07.501423286Z" level=info msg="Forcibly stopping sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\"" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.536 [WARNING][4845] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0", GenerateName:"calico-kube-controllers-77b8956656-", Namespace:"calico-system", SelfLink:"", UID:"281f014c-f5de-43ce-935e-8fa5520141f6", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8956656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"602324a916aeda798bff1a39266e4fac3e25cd2ad87526a3c4cfc28de46494ad", Pod:"calico-kube-controllers-77b8956656-5z2q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali523aca75cd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.537 [INFO][4845] k8s.go 608: Cleaning up netns ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.537 [INFO][4845] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" iface="eth0" netns="" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.537 [INFO][4845] k8s.go 615: Releasing IP address(es) ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.537 [INFO][4845] utils.go 188: Calico CNI releasing IP address ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.559 [INFO][4852] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.559 [INFO][4852] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.559 [INFO][4852] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.564 [WARNING][4852] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.565 [INFO][4852] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" HandleID="k8s-pod-network.ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Workload="localhost-k8s-calico--kube--controllers--77b8956656--5z2q5-eth0" Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.566 [INFO][4852] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:07.572516 containerd[1446]: 2024-09-04 17:42:07.569 [INFO][4845] k8s.go 621: Teardown processing complete. ContainerID="ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592" Sep 4 17:42:07.572971 containerd[1446]: time="2024-09-04T17:42:07.572571462Z" level=info msg="TearDown network for sandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" successfully" Sep 4 17:42:08.005822 containerd[1446]: time="2024-09-04T17:42:08.005742693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:42:08.006101 containerd[1446]: time="2024-09-04T17:42:08.005855049Z" level=info msg="RemovePodSandbox \"ee1da9236c522b6a7a650ac86bf6ac0b8c52506e30cff89c51aa3df130fde592\" returns successfully" Sep 4 17:42:08.006637 containerd[1446]: time="2024-09-04T17:42:08.006592370Z" level=info msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.057 [WARNING][4877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4kpcw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"912e9b42-ab77-443f-87ef-4e4df49a2075", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952", Pod:"coredns-76f75df574-4kpcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali973e76f36d8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.057 [INFO][4877] k8s.go 608: Cleaning up netns ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.057 [INFO][4877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" iface="eth0" netns="" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.057 [INFO][4877] k8s.go 615: Releasing IP address(es) ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.057 [INFO][4877] utils.go 188: Calico CNI releasing IP address ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.079 [INFO][4884] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.079 [INFO][4884] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.079 [INFO][4884] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.084 [WARNING][4884] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.084 [INFO][4884] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.086 [INFO][4884] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:08.092147 containerd[1446]: 2024-09-04 17:42:08.089 [INFO][4877] k8s.go 621: Teardown processing complete. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.092634 containerd[1446]: time="2024-09-04T17:42:08.092183925Z" level=info msg="TearDown network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" successfully" Sep 4 17:42:08.092634 containerd[1446]: time="2024-09-04T17:42:08.092211168Z" level=info msg="StopPodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" returns successfully" Sep 4 17:42:08.092843 containerd[1446]: time="2024-09-04T17:42:08.092809481Z" level=info msg="RemovePodSandbox for \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" Sep 4 17:42:08.092843 containerd[1446]: time="2024-09-04T17:42:08.092838526Z" level=info msg="Forcibly stopping sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\"" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.147 [WARNING][4906] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4kpcw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"912e9b42-ab77-443f-87ef-4e4df49a2075", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e0d2c5a99cf6f3fbded92a865805267d830daa044869e856513760b8ae60952", Pod:"coredns-76f75df574-4kpcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali973e76f36d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.147 [INFO][4906] k8s.go 
608: Cleaning up netns ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.147 [INFO][4906] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" iface="eth0" netns="" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.147 [INFO][4906] k8s.go 615: Releasing IP address(es) ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.147 [INFO][4906] utils.go 188: Calico CNI releasing IP address ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.168 [INFO][4913] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.168 [INFO][4913] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.168 [INFO][4913] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.173 [WARNING][4913] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.173 [INFO][4913] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" HandleID="k8s-pod-network.b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Workload="localhost-k8s-coredns--76f75df574--4kpcw-eth0" Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.175 [INFO][4913] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:08.180815 containerd[1446]: 2024-09-04 17:42:08.177 [INFO][4906] k8s.go 621: Teardown processing complete. ContainerID="b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14" Sep 4 17:42:08.181267 containerd[1446]: time="2024-09-04T17:42:08.180865644Z" level=info msg="TearDown network for sandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" successfully" Sep 4 17:42:08.464101 containerd[1446]: time="2024-09-04T17:42:08.464034225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:42:08.464671 containerd[1446]: time="2024-09-04T17:42:08.464119329Z" level=info msg="RemovePodSandbox \"b6247e5f6c69e4bad02fdaa50df0359c6b7aa9fba217b2aabcfb84a715b64b14\" returns successfully" Sep 4 17:42:08.464734 containerd[1446]: time="2024-09-04T17:42:08.464699637Z" level=info msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.528 [WARNING][4935] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da", Pod:"csi-node-driver-fjlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali988d9741bce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.529 [INFO][4935] k8s.go 608: Cleaning up netns ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.529 [INFO][4935] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" iface="eth0" netns="" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.529 [INFO][4935] k8s.go 615: Releasing IP address(es) ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.529 [INFO][4935] utils.go 188: Calico CNI releasing IP address ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.550 [INFO][4942] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.550 [INFO][4942] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.550 [INFO][4942] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.555 [WARNING][4942] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.555 [INFO][4942] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.556 [INFO][4942] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:08.561792 containerd[1446]: 2024-09-04 17:42:08.559 [INFO][4935] k8s.go 621: Teardown processing complete. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.562363 containerd[1446]: time="2024-09-04T17:42:08.561830717Z" level=info msg="TearDown network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" successfully" Sep 4 17:42:08.562363 containerd[1446]: time="2024-09-04T17:42:08.561859081Z" level=info msg="StopPodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" returns successfully" Sep 4 17:42:08.562526 containerd[1446]: time="2024-09-04T17:42:08.562481500Z" level=info msg="RemovePodSandbox for \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" Sep 4 17:42:08.562526 containerd[1446]: time="2024-09-04T17:42:08.562520576Z" level=info msg="Forcibly stopping sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\"" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.614 [WARNING][4965] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3dbe29-0d6e-49c3-ba6a-a1f8d88c0ba5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90740f76b66351fff45c62a96ab96281feab265f055740fca4b5bad5e60e41da", Pod:"csi-node-driver-fjlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali988d9741bce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.614 [INFO][4965] k8s.go 608: Cleaning up netns ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.614 [INFO][4965] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" iface="eth0" netns="" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.614 [INFO][4965] k8s.go 615: Releasing IP address(es) ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.614 [INFO][4965] utils.go 188: Calico CNI releasing IP address ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.635 [INFO][4973] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.635 [INFO][4973] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.635 [INFO][4973] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.640 [WARNING][4973] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.640 [INFO][4973] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" HandleID="k8s-pod-network.dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Workload="localhost-k8s-csi--node--driver--fjlxj-eth0" Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.641 [INFO][4973] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:42:08.648204 containerd[1446]: 2024-09-04 17:42:08.645 [INFO][4965] k8s.go 621: Teardown processing complete. ContainerID="dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da" Sep 4 17:42:08.648643 containerd[1446]: time="2024-09-04T17:42:08.648220269Z" level=info msg="TearDown network for sandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" successfully" Sep 4 17:42:08.679508 containerd[1446]: time="2024-09-04T17:42:08.679466835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:42:08.679589 containerd[1446]: time="2024-09-04T17:42:08.679551568Z" level=info msg="RemovePodSandbox \"dcee68978196ec1042e497332950f6ececfba25296d00511ce200b039c35f8da\" returns successfully" Sep 4 17:42:08.679964 containerd[1446]: time="2024-09-04T17:42:08.679943973Z" level=info msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.719 [WARNING][4995] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6s9q5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69449cbf-db23-4a42-849e-f85741ba3407", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25", Pod:"coredns-76f75df574-6s9q5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b3ce2a3f70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.719 [INFO][4995] k8s.go 608: Cleaning up netns 
ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.719 [INFO][4995] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" iface="eth0" netns="" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.719 [INFO][4995] k8s.go 615: Releasing IP address(es) ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.719 [INFO][4995] utils.go 188: Calico CNI releasing IP address ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.744 [INFO][5003] ipam_plugin.go 417: Releasing address using handleID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.744 [INFO][5003] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.744 [INFO][5003] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.749 [WARNING][5003] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.749 [INFO][5003] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.751 [INFO][5003] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:08.757395 containerd[1446]: 2024-09-04 17:42:08.754 [INFO][4995] k8s.go 621: Teardown processing complete. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.757395 containerd[1446]: time="2024-09-04T17:42:08.757340248Z" level=info msg="TearDown network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" successfully" Sep 4 17:42:08.757395 containerd[1446]: time="2024-09-04T17:42:08.757371409Z" level=info msg="StopPodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" returns successfully" Sep 4 17:42:08.758062 containerd[1446]: time="2024-09-04T17:42:08.757995501Z" level=info msg="RemovePodSandbox for \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" Sep 4 17:42:08.758062 containerd[1446]: time="2024-09-04T17:42:08.758036840Z" level=info msg="Forcibly stopping sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\"" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.794 [WARNING][5026] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6s9q5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69449cbf-db23-4a42-849e-f85741ba3407", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04c03172b432d0505a5710f388dc821b038796e74ab7f86a610e8184918d3c25", Pod:"coredns-76f75df574-6s9q5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b3ce2a3f70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.794 [INFO][5026] k8s.go 608: Cleaning up netns 
ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.794 [INFO][5026] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" iface="eth0" netns="" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.794 [INFO][5026] k8s.go 615: Releasing IP address(es) ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.795 [INFO][5026] utils.go 188: Calico CNI releasing IP address ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.820 [INFO][5035] ipam_plugin.go 417: Releasing address using handleID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.820 [INFO][5035] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.820 [INFO][5035] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.825 [WARNING][5035] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.825 [INFO][5035] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" HandleID="k8s-pod-network.84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Workload="localhost-k8s-coredns--76f75df574--6s9q5-eth0" Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.826 [INFO][5035] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:08.832247 containerd[1446]: 2024-09-04 17:42:08.829 [INFO][5026] k8s.go 621: Teardown processing complete. ContainerID="84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8" Sep 4 17:42:08.832866 containerd[1446]: time="2024-09-04T17:42:08.832794251Z" level=info msg="TearDown network for sandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" successfully" Sep 4 17:42:08.857579 containerd[1446]: time="2024-09-04T17:42:08.857518008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:42:08.857683 containerd[1446]: time="2024-09-04T17:42:08.857659080Z" level=info msg="RemovePodSandbox \"84dd55b7d1416d9c2cfcd661101464d210eaf6a2f8ddbd07b11cbd8564fed3a8\" returns successfully" Sep 4 17:42:10.988653 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:42226.service - OpenSSH per-connection server daemon (10.0.0.1:42226). 
Sep 4 17:42:11.035769 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 42226 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:11.037408 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:11.041753 systemd-logind[1427]: New session 16 of user core. Sep 4 17:42:11.057440 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:42:11.186422 sshd[5045]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:11.194813 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:42226.service: Deactivated successfully. Sep 4 17:42:11.197009 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:42:11.198746 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:42:11.207671 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:42234.service - OpenSSH per-connection server daemon (10.0.0.1:42234). Sep 4 17:42:11.208758 systemd-logind[1427]: Removed session 16. Sep 4 17:42:11.245423 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 42234 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:11.247352 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:11.252152 systemd-logind[1427]: New session 17 of user core. Sep 4 17:42:11.262475 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:42:11.534744 sshd[5059]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:11.542662 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:42234.service: Deactivated successfully. Sep 4 17:42:11.544864 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:42:11.546737 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:42:11.554758 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). Sep 4 17:42:11.555821 systemd-logind[1427]: Removed session 17. 
Sep 4 17:42:11.593556 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:11.595151 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:11.599633 systemd-logind[1427]: New session 18 of user core. Sep 4 17:42:11.611453 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:42:13.121658 sshd[5071]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:13.133072 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:42238.service: Deactivated successfully. Sep 4 17:42:13.135294 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:42:13.136481 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:42:13.139517 systemd-logind[1427]: Removed session 18. Sep 4 17:42:13.148721 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242). Sep 4 17:42:13.190732 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:13.192939 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:13.198330 systemd-logind[1427]: New session 19 of user core. Sep 4 17:42:13.209551 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:42:13.447098 sshd[5099]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:13.457739 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:42242.service: Deactivated successfully. Sep 4 17:42:13.460554 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:42:13.462812 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:42:13.473850 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:42246.service - OpenSSH per-connection server daemon (10.0.0.1:42246). Sep 4 17:42:13.475082 systemd-logind[1427]: Removed session 19. 
Sep 4 17:42:13.509170 sshd[5111]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:13.510870 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:13.515267 systemd-logind[1427]: New session 20 of user core. Sep 4 17:42:13.523577 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:42:13.639817 sshd[5111]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:13.644581 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:42246.service: Deactivated successfully. Sep 4 17:42:13.646940 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:42:13.647695 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:42:13.649026 systemd-logind[1427]: Removed session 20. Sep 4 17:42:18.651430 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:53730.service - OpenSSH per-connection server daemon (10.0.0.1:53730). Sep 4 17:42:18.689409 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 53730 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:18.691076 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:18.695129 systemd-logind[1427]: New session 21 of user core. Sep 4 17:42:18.705441 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:42:18.815582 sshd[5152]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:18.819282 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:53730.service: Deactivated successfully. Sep 4 17:42:18.821227 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:42:18.821850 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:42:18.822785 systemd-logind[1427]: Removed session 21. 
Sep 4 17:42:22.415287 kubelet[2565]: E0904 17:42:22.415226 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:42:23.827610 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736). Sep 4 17:42:23.864665 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:23.866250 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:23.870222 systemd-logind[1427]: New session 22 of user core. Sep 4 17:42:23.877428 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:42:23.984257 sshd[5168]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:23.988686 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:53736.service: Deactivated successfully. Sep 4 17:42:23.990987 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:42:23.991716 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:42:23.992716 systemd-logind[1427]: Removed session 22. Sep 4 17:42:27.639491 kubelet[2565]: I0904 17:42:27.639429 2565 topology_manager.go:215] "Topology Admit Handler" podUID="c353cf47-c521-4600-ad85-77fe85105a03" podNamespace="calico-apiserver" podName="calico-apiserver-54fb67db65-l56pl" Sep 4 17:42:27.651349 systemd[1]: Created slice kubepods-besteffort-podc353cf47_c521_4600_ad85_77fe85105a03.slice - libcontainer container kubepods-besteffort-podc353cf47_c521_4600_ad85_77fe85105a03.slice. 
Sep 4 17:42:27.791667 kubelet[2565]: I0904 17:42:27.791590 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wdr\" (UniqueName: \"kubernetes.io/projected/c353cf47-c521-4600-ad85-77fe85105a03-kube-api-access-d9wdr\") pod \"calico-apiserver-54fb67db65-l56pl\" (UID: \"c353cf47-c521-4600-ad85-77fe85105a03\") " pod="calico-apiserver/calico-apiserver-54fb67db65-l56pl" Sep 4 17:42:27.791667 kubelet[2565]: I0904 17:42:27.791669 2565 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c353cf47-c521-4600-ad85-77fe85105a03-calico-apiserver-certs\") pod \"calico-apiserver-54fb67db65-l56pl\" (UID: \"c353cf47-c521-4600-ad85-77fe85105a03\") " pod="calico-apiserver/calico-apiserver-54fb67db65-l56pl" Sep 4 17:42:27.892423 kubelet[2565]: E0904 17:42:27.892112 2565 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:42:27.892423 kubelet[2565]: E0904 17:42:27.892213 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c353cf47-c521-4600-ad85-77fe85105a03-calico-apiserver-certs podName:c353cf47-c521-4600-ad85-77fe85105a03 nodeName:}" failed. No retries permitted until 2024-09-04 17:42:28.392194933 +0000 UTC m=+81.102540263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c353cf47-c521-4600-ad85-77fe85105a03-calico-apiserver-certs") pod "calico-apiserver-54fb67db65-l56pl" (UID: "c353cf47-c521-4600-ad85-77fe85105a03") : secret "calico-apiserver-certs" not found Sep 4 17:42:28.556455 containerd[1446]: time="2024-09-04T17:42:28.556410158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb67db65-l56pl,Uid:c353cf47-c521-4600-ad85-77fe85105a03,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:42:28.668747 systemd-networkd[1375]: cali3bd7abbe092: Link UP Sep 4 17:42:28.669809 systemd-networkd[1375]: cali3bd7abbe092: Gained carrier Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.604 [INFO][5219] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0 calico-apiserver-54fb67db65- calico-apiserver c353cf47-c521-4600-ad85-77fe85105a03 1096 0 2024-09-04 17:42:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54fb67db65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54fb67db65-l56pl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bd7abbe092 [] []}} ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.604 [INFO][5219] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.630 [INFO][5234] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" HandleID="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Workload="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.637 [INFO][5234] ipam_plugin.go 270: Auto assigning IP ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" HandleID="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Workload="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f60d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54fb67db65-l56pl", "timestamp":"2024-09-04 17:42:28.630199437 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.637 [INFO][5234] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.637 [INFO][5234] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.638 [INFO][5234] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.639 [INFO][5234] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.643 [INFO][5234] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.646 [INFO][5234] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.648 [INFO][5234] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.649 [INFO][5234] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.650 [INFO][5234] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.651 [INFO][5234] ipam.go 1685: Creating new handle: k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.655 [INFO][5234] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.661 [INFO][5234] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" host="localhost" Sep 4 
17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.661 [INFO][5234] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" host="localhost" Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.661 [INFO][5234] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:28.682971 containerd[1446]: 2024-09-04 17:42:28.661 [INFO][5234] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" HandleID="k8s-pod-network.12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Workload="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.664 [INFO][5219] k8s.go 386: Populated endpoint ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0", GenerateName:"calico-apiserver-54fb67db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"c353cf47-c521-4600-ad85-77fe85105a03", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fb67db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54fb67db65-l56pl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bd7abbe092", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.665 [INFO][5219] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.665 [INFO][5219] dataplane_linux.go 68: Setting the host side veth name to cali3bd7abbe092 ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.670 [INFO][5219] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.671 [INFO][5219] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0", GenerateName:"calico-apiserver-54fb67db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"c353cf47-c521-4600-ad85-77fe85105a03", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fb67db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a", Pod:"calico-apiserver-54fb67db65-l56pl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bd7abbe092", MAC:"82:b7:70:25:34:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:28.683791 containerd[1446]: 2024-09-04 17:42:28.677 [INFO][5219] k8s.go 500: Wrote updated endpoint to datastore ContainerID="12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a" Namespace="calico-apiserver" Pod="calico-apiserver-54fb67db65-l56pl" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb67db65--l56pl-eth0" Sep 4 17:42:28.710593 
containerd[1446]: time="2024-09-04T17:42:28.710474430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:28.710593 containerd[1446]: time="2024-09-04T17:42:28.710549123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:28.710807 containerd[1446]: time="2024-09-04T17:42:28.710567759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:28.710807 containerd[1446]: time="2024-09-04T17:42:28.710676055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:28.739482 systemd[1]: Started cri-containerd-12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a.scope - libcontainer container 12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a. Sep 4 17:42:28.754997 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:42:28.782220 containerd[1446]: time="2024-09-04T17:42:28.782173464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb67db65-l56pl,Uid:c353cf47-c521-4600-ad85-77fe85105a03,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a\"" Sep 4 17:42:28.783579 containerd[1446]: time="2024-09-04T17:42:28.783552403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:42:28.998176 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:44682.service - OpenSSH per-connection server daemon (10.0.0.1:44682). 
Sep 4 17:42:29.041615 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 44682 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:42:29.043602 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:42:29.047561 systemd-logind[1427]: New session 23 of user core. Sep 4 17:42:29.057480 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:42:29.183018 sshd[5297]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:29.187762 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:44682.service: Deactivated successfully. Sep 4 17:42:29.190070 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:42:29.190733 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:42:29.191802 systemd-logind[1427]: Removed session 23. Sep 4 17:42:30.356447 systemd-networkd[1375]: cali3bd7abbe092: Gained IPv6LL Sep 4 17:42:31.090459 containerd[1446]: time="2024-09-04T17:42:31.090388183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:31.130433 containerd[1446]: time="2024-09-04T17:42:31.130357105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:42:31.173289 containerd[1446]: time="2024-09-04T17:42:31.173221220Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:31.218493 containerd[1446]: time="2024-09-04T17:42:31.218425751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:31.219322 containerd[1446]: time="2024-09-04T17:42:31.219262004Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.435673773s" Sep 4 17:42:31.219391 containerd[1446]: time="2024-09-04T17:42:31.219325595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:42:31.221260 containerd[1446]: time="2024-09-04T17:42:31.221226705Z" level=info msg="CreateContainer within sandbox \"12f0478da5e7c62a40509709f48c381f924a358030c0f83c8bab1a03869bd26a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"