Oct 8 19:55:04.917805 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 19:55:04.917833 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:55:04.917847 kernel: BIOS-provided physical RAM map:
Oct 8 19:55:04.917855 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 8 19:55:04.917864 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 8 19:55:04.917872 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 8 19:55:04.917883 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 8 19:55:04.917894 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 8 19:55:04.917905 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 8 19:55:04.917916 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 8 19:55:04.917931 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 8 19:55:04.917942 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Oct 8 19:55:04.917953 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Oct 8 19:55:04.917964 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Oct 8 19:55:04.917978 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 8 19:55:04.917991 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 8 19:55:04.918006 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 8 19:55:04.918018 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 8 19:55:04.918029 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 8 19:55:04.918052 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 19:55:04.918065 kernel: NX (Execute Disable) protection: active
Oct 8 19:55:04.918076 kernel: APIC: Static calls initialized
Oct 8 19:55:04.918088 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:55:04.918100 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Oct 8 19:55:04.918111 kernel: SMBIOS 2.8 present.
Oct 8 19:55:04.918120 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 8 19:55:04.918129 kernel: Hypervisor detected: KVM
Oct 8 19:55:04.918142 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 19:55:04.918152 kernel: kvm-clock: using sched offset of 4007205985 cycles
Oct 8 19:55:04.918162 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 19:55:04.918172 kernel: tsc: Detected 2794.748 MHz processor
Oct 8 19:55:04.918182 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 19:55:04.918193 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 19:55:04.918202 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 8 19:55:04.918212 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 8 19:55:04.918222 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 19:55:04.918235 kernel: Using GB pages for direct mapping
Oct 8 19:55:04.918244 kernel: Secure boot disabled
Oct 8 19:55:04.918254 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:55:04.918264 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 8 19:55:04.918285 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:55:04.918295 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918306 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918319 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 8 19:55:04.918329 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918340 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918351 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918361 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:04.918371 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 19:55:04.918381 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 8 19:55:04.918394 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Oct 8 19:55:04.918405 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 8 19:55:04.918415 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 8 19:55:04.918425 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 8 19:55:04.918435 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 8 19:55:04.918445 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 8 19:55:04.918455 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 8 19:55:04.918465 kernel: No NUMA configuration found
Oct 8 19:55:04.918475 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 8 19:55:04.918504 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 8 19:55:04.918514 kernel: Zone ranges:
Oct 8 19:55:04.918525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 19:55:04.918536 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 8 19:55:04.918545 kernel: Normal empty
Oct 8 19:55:04.918555 kernel: Movable zone start for each node
Oct 8 19:55:04.918566 kernel: Early memory node ranges
Oct 8 19:55:04.918577 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 8 19:55:04.918587 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 8 19:55:04.918597 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 8 19:55:04.918611 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 8 19:55:04.918621 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 8 19:55:04.918631 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 8 19:55:04.918641 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 8 19:55:04.918652 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:55:04.918662 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 8 19:55:04.918672 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 8 19:55:04.918682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:55:04.918692 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 8 19:55:04.918706 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 8 19:55:04.918716 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 8 19:55:04.918726 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 19:55:04.918737 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 19:55:04.918747 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 19:55:04.918757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 19:55:04.918767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 19:55:04.918778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 19:55:04.918788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 19:55:04.918798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 19:55:04.918813 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 19:55:04.918825 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 19:55:04.918838 kernel: TSC deadline timer available
Oct 8 19:55:04.918851 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 8 19:55:04.918864 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 19:55:04.918876 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 8 19:55:04.918889 kernel: kvm-guest: setup PV sched yield
Oct 8 19:55:04.918901 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 8 19:55:04.918914 kernel: Booting paravirtualized kernel on KVM
Oct 8 19:55:04.918931 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 19:55:04.918944 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 8 19:55:04.918956 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 8 19:55:04.918966 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 8 19:55:04.918976 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 8 19:55:04.918987 kernel: kvm-guest: PV spinlocks enabled
Oct 8 19:55:04.918997 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 19:55:04.919008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:55:04.919022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:55:04.919033 kernel: random: crng init done
Oct 8 19:55:04.919054 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:55:04.919064 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:55:04.919075 kernel: Fallback order for Node 0: 0
Oct 8 19:55:04.919085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 8 19:55:04.919095 kernel: Policy zone: DMA32
Oct 8 19:55:04.919105 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:55:04.919116 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved)
Oct 8 19:55:04.919130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:55:04.919140 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 19:55:04.919150 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 19:55:04.919161 kernel: Dynamic Preempt: voluntary
Oct 8 19:55:04.919181 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:55:04.919195 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:55:04.919206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:55:04.919217 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:55:04.919228 kernel: Rude variant of Tasks RCU enabled.
Oct 8 19:55:04.919239 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:55:04.919250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:55:04.919260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:55:04.919273 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 8 19:55:04.919285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:55:04.919295 kernel: Console: colour dummy device 80x25
Oct 8 19:55:04.919305 kernel: printk: console [ttyS0] enabled
Oct 8 19:55:04.919316 kernel: ACPI: Core revision 20230628
Oct 8 19:55:04.919330 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 19:55:04.919341 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 19:55:04.919352 kernel: x2apic enabled
Oct 8 19:55:04.919363 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 19:55:04.919373 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 8 19:55:04.919384 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 8 19:55:04.919394 kernel: kvm-guest: setup PV IPIs
Oct 8 19:55:04.919405 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 19:55:04.919416 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 19:55:04.919430 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 8 19:55:04.919441 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 19:55:04.919452 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 19:55:04.919462 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 19:55:04.919473 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 19:55:04.919498 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 19:55:04.919509 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 19:55:04.919519 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 19:55:04.919530 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 19:55:04.919545 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 19:55:04.919556 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 19:55:04.919566 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 19:55:04.919578 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 19:55:04.919590 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 19:55:04.919601 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 19:55:04.919611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 19:55:04.919622 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 19:55:04.919637 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 19:55:04.919647 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 19:55:04.919658 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 19:55:04.919669 kernel: Freeing SMP alternatives memory: 32K
Oct 8 19:55:04.919680 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:55:04.919690 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:55:04.919702 kernel: landlock: Up and running.
Oct 8 19:55:04.919712 kernel: SELinux: Initializing.
Oct 8 19:55:04.919722 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:55:04.919736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:55:04.919747 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 19:55:04.919758 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:55:04.919768 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:55:04.919780 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:55:04.919790 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 19:55:04.919801 kernel: ... version: 0
Oct 8 19:55:04.919812 kernel: ... bit width: 48
Oct 8 19:55:04.919822 kernel: ... generic registers: 6
Oct 8 19:55:04.919836 kernel: ... value mask: 0000ffffffffffff
Oct 8 19:55:04.919847 kernel: ... max period: 00007fffffffffff
Oct 8 19:55:04.919857 kernel: ... fixed-purpose events: 0
Oct 8 19:55:04.919868 kernel: ... event mask: 000000000000003f
Oct 8 19:55:04.919878 kernel: signal: max sigframe size: 1776
Oct 8 19:55:04.919889 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:55:04.919902 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:55:04.919915 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:55:04.919928 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 19:55:04.919945 kernel: .... node #0, CPUs: #1 #2 #3
Oct 8 19:55:04.919958 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:55:04.919972 kernel: smpboot: Max logical packages: 1
Oct 8 19:55:04.919985 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 8 19:55:04.919998 kernel: devtmpfs: initialized
Oct 8 19:55:04.920011 kernel: x86/mm: Memory block size: 128MB
Oct 8 19:55:04.920026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 8 19:55:04.920051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 8 19:55:04.920066 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 8 19:55:04.920084 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 8 19:55:04.920098 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 8 19:55:04.920112 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:55:04.920124 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:55:04.920135 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:55:04.920146 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:55:04.920157 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:55:04.920167 kernel: audit: type=2000 audit(1728417304.741:1): state=initialized audit_enabled=0 res=1
Oct 8 19:55:04.920178 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:55:04.920193 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 19:55:04.920202 kernel: cpuidle: using governor menu
Oct 8 19:55:04.920209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:55:04.920217 kernel: dca service started, version 1.12.1
Oct 8 19:55:04.920225 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 19:55:04.920232 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 8 19:55:04.920242 kernel: PCI: Using configuration type 1 for base access
Oct 8 19:55:04.920252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 19:55:04.920262 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:55:04.920276 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:55:04.920286 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:55:04.920297 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:55:04.920308 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:55:04.920319 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:55:04.920329 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:55:04.920339 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:55:04.920350 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:55:04.920361 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 19:55:04.920374 kernel: ACPI: Interpreter enabled
Oct 8 19:55:04.920382 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 8 19:55:04.920390 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 19:55:04.920400 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 19:55:04.920411 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 19:55:04.920421 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 19:55:04.920431 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:55:04.920669 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:55:04.920844 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 19:55:04.921025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 19:55:04.921050 kernel: PCI host bridge to bus 0000:00
Oct 8 19:55:04.921227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 19:55:04.921376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 19:55:04.921538 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 19:55:04.921683 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 8 19:55:04.921836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 19:55:04.922010 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 8 19:55:04.922234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:55:04.922415 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 19:55:04.922612 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 8 19:55:04.922774 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 8 19:55:04.922949 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 8 19:55:04.923149 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 8 19:55:04.923308 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 8 19:55:04.923470 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 19:55:04.923666 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:55:04.923831 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 8 19:55:04.924060 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 8 19:55:04.924199 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 8 19:55:04.924334 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 8 19:55:04.924461 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 8 19:55:04.924602 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 8 19:55:04.924723 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 8 19:55:04.924851 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 8 19:55:04.924972 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 8 19:55:04.925106 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 8 19:55:04.925266 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 8 19:55:04.925387 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 8 19:55:04.925528 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 19:55:04.925670 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 19:55:04.925797 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 19:55:04.925918 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 8 19:55:04.926083 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 8 19:55:04.926221 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 19:55:04.926344 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 8 19:55:04.926354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 19:55:04.926362 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 19:55:04.926370 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 19:55:04.926377 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 19:55:04.926388 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 19:55:04.926396 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 19:55:04.926403 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 19:55:04.926411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 19:55:04.926418 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 19:55:04.926426 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 19:55:04.926433 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 19:55:04.926441 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 19:55:04.926449 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 19:55:04.926458 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 19:55:04.926474 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 19:55:04.926507 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 19:55:04.926528 kernel: iommu: Default domain type: Translated
Oct 8 19:55:04.926542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 19:55:04.926556 kernel: efivars: Registered efivars operations
Oct 8 19:55:04.926564 kernel: PCI: Using ACPI for IRQ routing
Oct 8 19:55:04.926571 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 19:55:04.926579 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 8 19:55:04.926613 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 8 19:55:04.926620 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 8 19:55:04.926628 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 8 19:55:04.926801 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 19:55:04.926966 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 19:55:04.927098 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 19:55:04.927108 kernel: vgaarb: loaded
Oct 8 19:55:04.927116 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 19:55:04.927123 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 19:55:04.927135 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 19:55:04.927142 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:55:04.927150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:55:04.927158 kernel: pnp: PnP ACPI init
Oct 8 19:55:04.927287 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 19:55:04.927299 kernel: pnp: PnP ACPI: found 6 devices
Oct 8 19:55:04.927307 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 19:55:04.927314 kernel: NET: Registered PF_INET protocol family
Oct 8 19:55:04.927325 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:55:04.927333 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:55:04.927340 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:55:04.927348 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:55:04.927356 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:55:04.927363 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:55:04.927371 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:55:04.927378 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:55:04.927386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:55:04.927396 kernel: NET: Registered PF_XDP protocol family
Oct 8 19:55:04.927532 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 8 19:55:04.927654 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 8 19:55:04.927765 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 19:55:04.927874 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 19:55:04.927982 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 19:55:04.928111 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 8 19:55:04.928220 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 19:55:04.928334 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 8 19:55:04.928344 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:55:04.928352 kernel: Initialise system trusted keyrings
Oct 8 19:55:04.928359 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:55:04.928367 kernel: Key type asymmetric registered
Oct 8 19:55:04.928374 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:55:04.928382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 19:55:04.928390 kernel: io scheduler mq-deadline registered
Oct 8 19:55:04.928398 kernel: io scheduler kyber registered
Oct 8 19:55:04.928408 kernel: io scheduler bfq registered
Oct 8 19:55:04.928415 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 19:55:04.928423 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 19:55:04.928431 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 19:55:04.928439 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 8 19:55:04.928446 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:55:04.928454 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 19:55:04.928462 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 19:55:04.928469 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 19:55:04.928490 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 19:55:04.928621 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 8 19:55:04.928736 kernel: rtc_cmos 00:04: registered as rtc0
Oct 8 19:55:04.928746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 19:55:04.928856 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:55:04 UTC (1728417304)
Oct 8 19:55:04.929026 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 8 19:55:04.929049 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 8 19:55:04.929063 kernel: efifb: probing for efifb
Oct 8 19:55:04.929073 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Oct 8 19:55:04.929083 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Oct 8 19:55:04.929092 kernel: efifb: scrolling: redraw
Oct 8 19:55:04.929101 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Oct 8 19:55:04.929111 kernel: Console: switching to colour frame buffer device 100x37
Oct 8 19:55:04.929141 kernel: fb0: EFI VGA frame buffer device
Oct 8 19:55:04.929151 kernel: pstore: Using crash dump compression: deflate
Oct 8 19:55:04.929159 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 8 19:55:04.929168 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:55:04.929176 kernel: Segment Routing with IPv6
Oct 8 19:55:04.929184 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:55:04.929192 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:55:04.929200 kernel: Key type dns_resolver registered
Oct 8 19:55:04.929208 kernel: IPI shorthand broadcast: enabled
Oct 8 19:55:04.929216 kernel: sched_clock: Marking stable (634003748, 114018688)->(797896230, -49873794)
Oct 8 19:55:04.929223 kernel: registered taskstats version 1
Oct 8 19:55:04.929231 kernel: Loading compiled-in X.509 certificates
Oct 8 19:55:04.929239 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 19:55:04.929249 kernel: Key type .fscrypt registered
Oct 8 19:55:04.929257 kernel: Key type fscrypt-provisioning registered
Oct 8 19:55:04.929265 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:55:04.929273 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:55:04.929281 kernel: ima: No architecture policies found
Oct 8 19:55:04.929288 kernel: clk: Disabling unused clocks
Oct 8 19:55:04.929296 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 19:55:04.929305 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 19:55:04.929314 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 19:55:04.929322 kernel: Run /init as init process
Oct 8 19:55:04.929330 kernel: with arguments:
Oct 8 19:55:04.929338 kernel: /init
Oct 8 19:55:04.929346 kernel: with environment:
Oct 8 19:55:04.929353 kernel: HOME=/
Oct 8 19:55:04.929361 kernel: TERM=linux
Oct 8 19:55:04.929369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:55:04.929379 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:55:04.929391 systemd[1]: Detected virtualization kvm.
Oct 8 19:55:04.929399 systemd[1]: Detected architecture x86-64.
Oct 8 19:55:04.929408 systemd[1]: Running in initrd.
Oct 8 19:55:04.929420 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:55:04.929430 systemd[1]: Hostname set to .
Oct 8 19:55:04.929439 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:55:04.929447 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:55:04.929456 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:55:04.929464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:55:04.929473 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:55:04.929541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:55:04.929550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:55:04.929562 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:55:04.929572 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:55:04.929581 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:55:04.929589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:55:04.929598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:55:04.929606 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:55:04.929614 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:55:04.929625 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:55:04.929633 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:55:04.929643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:55:04.929654 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:55:04.929666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:55:04.929674 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:55:04.929683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:55:04.929691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:04.929702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:04.929710 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:55:04.929719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:55:04.929727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:55:04.929735 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:55:04.929744 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:55:04.929752 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:55:04.929760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:55:04.929769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:04.929779 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:55:04.929788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:55:04.929796 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:55:04.929805 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:55:04.929836 systemd-journald[193]: Collecting audit messages is disabled.
Oct 8 19:55:04.929859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:04.929870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:55:04.929881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:55:04.929895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:55:04.929905 systemd-journald[193]: Journal started
Oct 8 19:55:04.929927 systemd-journald[193]: Runtime Journal (/run/log/journal/0fae61f4c7cd4dccb4a08db584f32571) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:55:04.926879 systemd-modules-load[194]: Inserted module 'overlay'
Oct 8 19:55:04.931618 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:55:04.935875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:55:04.939669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:04.947855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:04.953691 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:55:04.954452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:55:04.967020 dracut-cmdline[220]: dracut-dracut-053
Oct 8 19:55:04.970895 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:55:04.976071 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:55:04.978147 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 8 19:55:04.979086 kernel: Bridge firewalling registered
Oct 8 19:55:04.980493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:55:04.989658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:55:04.999715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:55:05.006666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:55:05.040657 systemd-resolved[265]: Positive Trust Anchors:
Oct 8 19:55:05.040679 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:55:05.040723 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:55:05.053190 systemd-resolved[265]: Defaulting to hostname 'linux'.
Oct 8 19:55:05.055284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:55:05.055882 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:55:05.072527 kernel: SCSI subsystem initialized
Oct 8 19:55:05.084506 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:55:05.097521 kernel: iscsi: registered transport (tcp)
Oct 8 19:55:05.122819 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:55:05.122896 kernel: QLogic iSCSI HBA Driver
Oct 8 19:55:05.178961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:55:05.198854 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:55:05.226395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:55:05.226514 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:55:05.226533 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:55:05.270531 kernel: raid6: avx2x4 gen() 20629 MB/s
Oct 8 19:55:05.287510 kernel: raid6: avx2x2 gen() 20566 MB/s
Oct 8 19:55:05.304715 kernel: raid6: avx2x1 gen() 20330 MB/s
Oct 8 19:55:05.304779 kernel: raid6: using algorithm avx2x4 gen() 20629 MB/s
Oct 8 19:55:05.322872 kernel: raid6: .... xor() 7503 MB/s, rmw enabled
Oct 8 19:55:05.322952 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:55:05.343522 kernel: xor: automatically using best checksumming function avx
Oct 8 19:55:05.504509 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:55:05.518984 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:55:05.527645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:55:05.546578 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Oct 8 19:55:05.553392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:05.560835 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:55:05.579305 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Oct 8 19:55:05.615067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:55:05.627675 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:55:05.692638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:55:05.703676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:55:05.718443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:55:05.723246 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:55:05.724632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:55:05.730632 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:55:05.725963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:55:05.737505 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:55:05.739684 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:55:05.748610 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:55:05.753901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:55:05.753943 kernel: GPT:9289727 != 19775487
Oct 8 19:55:05.753958 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:55:05.753972 kernel: GPT:9289727 != 19775487
Oct 8 19:55:05.754857 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:55:05.754882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:55:05.761000 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:55:05.761093 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:55:05.762371 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:55:05.769234 kernel: libata version 3.00 loaded.
Oct 8 19:55:05.772426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:55:05.773805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:05.782176 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:55:05.782362 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:55:05.775693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:55:05.788246 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:55:05.788610 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:55:05.776026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:55:05.777337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:05.777952 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:05.792931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:05.801695 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464)
Oct 8 19:55:05.801728 kernel: scsi host0: ahci
Oct 8 19:55:05.802688 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
Oct 8 19:55:05.807583 kernel: scsi host1: ahci
Oct 8 19:55:05.809517 kernel: scsi host2: ahci
Oct 8 19:55:05.809758 kernel: scsi host3: ahci
Oct 8 19:55:05.811499 kernel: scsi host4: ahci
Oct 8 19:55:05.812528 kernel: scsi host5: ahci
Oct 8 19:55:05.815519 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 8 19:55:05.815579 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 8 19:55:05.815593 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 8 19:55:05.815606 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 8 19:55:05.816070 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 8 19:55:05.816887 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 8 19:55:05.832496 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:55:05.845038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:55:05.850998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:55:05.851733 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:55:05.858318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:55:05.867674 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:55:05.868205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:55:05.868275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:05.871191 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:05.872635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:05.887302 disk-uuid[558]: Primary Header is updated.
Oct 8 19:55:05.887302 disk-uuid[558]: Secondary Entries is updated.
Oct 8 19:55:05.887302 disk-uuid[558]: Secondary Header is updated.
Oct 8 19:55:05.891530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:05.895513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:55:05.902441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:55:05.906693 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:55:05.931458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:06.127507 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:55:06.127582 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:55:06.128497 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:55:06.129505 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:55:06.130505 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:55:06.131655 kernel: ata3.00: applying bridge limits
Oct 8 19:55:06.132505 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:55:06.132525 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:55:06.133508 kernel: ata3.00: configured for UDMA/100
Oct 8 19:55:06.134520 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:55:06.189521 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:55:06.189803 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:55:06.203718 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:55:06.931387 disk-uuid[560]: The operation has completed successfully.
Oct 8 19:55:06.932942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:55:06.962765 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:55:06.962907 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:55:06.988686 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:55:06.992181 sh[599]: Success
Oct 8 19:55:07.005494 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:55:07.041440 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:55:07.063350 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:55:07.070853 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:55:07.082342 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:55:07.082377 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:55:07.082388 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:55:07.083579 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:55:07.090128 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:55:07.094365 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:55:07.096037 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:55:07.103673 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:55:07.119742 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:55:07.133842 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:55:07.133938 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:55:07.133950 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:55:07.138530 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:55:07.149956 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:55:07.151943 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:55:07.163199 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:55:07.172056 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:55:07.230893 ignition[695]: Ignition 2.19.0
Oct 8 19:55:07.230904 ignition[695]: Stage: fetch-offline
Oct 8 19:55:07.230939 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:07.230957 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:07.231056 ignition[695]: parsed url from cmdline: ""
Oct 8 19:55:07.231060 ignition[695]: no config URL provided
Oct 8 19:55:07.231065 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:55:07.231074 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:55:07.231099 ignition[695]: op(1): [started] loading QEMU firmware config module
Oct 8 19:55:07.231104 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:55:07.241793 ignition[695]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:55:07.256612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:55:07.265597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:55:07.292623 ignition[695]: parsing config with SHA512: f94bd71460f591efd6454a9fcd4f65b481587cf0f9c9568a559d3a7c49455a197f8a02bc26da0279dc2ed07964f8f2403d69b33c07c693b04afc2824aa064b03
Oct 8 19:55:07.295783 systemd-networkd[788]: lo: Link UP
Oct 8 19:55:07.295794 systemd-networkd[788]: lo: Gained carrier
Oct 8 19:55:07.299987 ignition[695]: fetch-offline: fetch-offline passed
Oct 8 19:55:07.297329 systemd-networkd[788]: Enumeration completed
Oct 8 19:55:07.300235 ignition[695]: Ignition finished successfully
Oct 8 19:55:07.297529 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:55:07.297763 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:55:07.297767 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:55:07.298887 unknown[695]: fetched base config from "system"
Oct 8 19:55:07.298895 unknown[695]: fetched user config from "qemu"
Oct 8 19:55:07.303964 systemd-networkd[788]: eth0: Link UP
Oct 8 19:55:07.303967 systemd-networkd[788]: eth0: Gained carrier
Oct 8 19:55:07.303981 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:55:07.305053 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:55:07.307363 systemd[1]: Reached target network.target - Network.
Oct 8 19:55:07.308641 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:55:07.317702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:55:07.323549 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:55:07.331124 ignition[791]: Ignition 2.19.0
Oct 8 19:55:07.331136 ignition[791]: Stage: kargs
Oct 8 19:55:07.331297 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:07.331309 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:07.347328 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:55:07.332083 ignition[791]: kargs: kargs passed
Oct 8 19:55:07.332120 ignition[791]: Ignition finished successfully
Oct 8 19:55:07.365685 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:55:07.377536 ignition[801]: Ignition 2.19.0
Oct 8 19:55:07.377555 ignition[801]: Stage: disks
Oct 8 19:55:07.381128 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:55:07.377770 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:07.397238 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:55:07.377786 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:07.398767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:55:07.378929 ignition[801]: disks: disks passed
Oct 8 19:55:07.400917 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:55:07.378992 ignition[801]: Ignition finished successfully
Oct 8 19:55:07.401936 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:55:07.402337 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:55:07.420753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:55:07.432843 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:55:07.482596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:55:07.496650 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:55:07.593499 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:55:07.593783 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:55:07.594625 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:55:07.607563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:55:07.609599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:55:07.611829 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:55:07.611882 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:55:07.621441 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (819)
Oct 8 19:55:07.621493 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:55:07.621513 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:55:07.621528 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:55:07.611910 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:55:07.624174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:55:07.626045 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:55:07.627105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:55:07.642809 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:55:07.676278 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:55:07.681384 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:55:07.686730 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:55:07.691459 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:55:07.786234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:55:07.801694 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:55:07.805026 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:55:07.809508 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:55:07.830896 ignition[931]: INFO : Ignition 2.19.0
Oct 8 19:55:07.830896 ignition[931]: INFO : Stage: mount
Oct 8 19:55:07.832834 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:07.832834 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:07.832834 ignition[931]: INFO : mount: mount passed
Oct 8 19:55:07.832834 ignition[931]: INFO : Ignition finished successfully
Oct 8 19:55:07.834670 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:55:07.844693 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:55:07.846034 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:55:08.081351 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:55:08.097623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:55:08.105014 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945)
Oct 8 19:55:08.105043 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:55:08.105054 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:55:08.105882 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:55:08.109500 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:55:08.110289 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
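The "cut: ... No such file or directory" messages above come from initrd-setup-root probing account databases that do not exist yet on a first boot. A sketch of the same probe, mimicking `cut -d: -f1` over each file and tolerating its absence (paths as in the log):

```python
# initrd-setup-root reads /sysroot/etc/{passwd,group,shadow,gshadow};
# on a first boot they are missing, producing the errors logged above.
for name in ("passwd", "group", "shadow", "gshadow"):
    path = f"/sysroot/etc/{name}"
    try:
        with open(path) as f:
            # Equivalent of `cut -d: -f1`: the entry name in each database.
            for line in f:
                print(line.split(":", 1)[0])
    except FileNotFoundError:
        print(f"cut: {path}: No such file or directory")
```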
Oct 8 19:55:08.130535 ignition[962]: INFO : Ignition 2.19.0
Oct 8 19:55:08.130535 ignition[962]: INFO : Stage: files
Oct 8 19:55:08.132460 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:08.132460 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:08.132460 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:55:08.136213 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:55:08.136213 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:55:08.136213 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:55:08.136213 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:55:08.136213 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:55:08.136021 unknown[962]: wrote ssh authorized keys file for user: core
Oct 8 19:55:08.144354 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:55:08.144354 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:55:08.185201 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:55:08.321470 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:55:08.321470 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:55:08.321470 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:55:08.327083 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:55:08.329033 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:55:08.330835 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:55:08.330835 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:55:08.330835 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:55:08.336818 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 19:55:08.854319 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:55:09.135660 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:55:09.135660 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:55:09.139524 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:55:09.159835 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:55:09.164748 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:55:09.166554 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:55:09.166554 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:55:09.169402 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:55:09.170888 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:55:09.172743 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:55:09.174611 ignition[962]: INFO : files: files passed
Oct 8 19:55:09.174611 ignition[962]: INFO : Ignition finished successfully
Oct 8 19:55:09.177787 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:55:09.186626 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:55:09.189404 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:55:09.192508 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:55:09.192674 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:55:09.199237 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:55:09.202172 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:55:09.203865 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:55:09.206810 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:55:09.204907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:55:09.207608 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:55:09.225678 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:55:09.249656 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:55:09.249789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:55:09.250507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:55:09.253864 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:55:09.254289 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:55:09.258924 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:55:09.281434 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:55:09.291749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:55:09.303616 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:55:09.304180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:55:09.304569 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:55:09.305047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:55:09.305204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:55:09.312104 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:55:09.314508 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:55:09.315056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:55:09.315417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:55:09.315930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:55:09.316289 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:55:09.316816 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:55:09.317180 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:55:09.317539 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:55:09.318128 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:55:09.318468 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:55:09.318593 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:55:09.319362 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:55:09.319881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:55:09.320192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:55:09.320309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:55:09.336705 systemd-networkd[788]: eth0: Gained IPv6LL
Oct 8 19:55:09.340794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:55:09.340938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:55:09.343380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:55:09.343514 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:55:09.345939 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:55:09.348050 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:55:09.353534 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:55:09.356369 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:55:09.358258 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:55:09.358794 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:55:09.358894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:55:09.360779 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:55:09.360868 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:55:09.362247 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:55:09.362382 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:55:09.364339 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:55:09.364443 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:55:09.374617 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:55:09.374874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:55:09.374995 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:55:09.377713 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:55:09.379060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:55:09.379172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:55:09.381115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:55:09.381221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:55:09.388259 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:55:09.388432 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:55:09.405942 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:55:09.415553 ignition[1017]: INFO : Ignition 2.19.0
Oct 8 19:55:09.415553 ignition[1017]: INFO : Stage: umount
Oct 8 19:55:09.417514 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:55:09.417514 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:55:09.420645 ignition[1017]: INFO : umount: umount passed
Oct 8 19:55:09.420645 ignition[1017]: INFO : Ignition finished successfully
Oct 8 19:55:09.422846 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:55:09.423923 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:55:09.426512 systemd[1]: Stopped target network.target - Network.
Oct 8 19:55:09.428253 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:55:09.428315 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:55:09.431288 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:55:09.431340 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:55:09.434275 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:55:09.435235 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:55:09.437159 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:55:09.437213 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:55:09.440377 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:55:09.442608 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:55:09.447519 systemd-networkd[788]: eth0: DHCPv6 lease lost
Oct 8 19:55:09.448716 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:55:09.449772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:55:09.453006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:55:09.454061 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:55:09.457155 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:55:09.458289 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:55:09.469645 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:55:09.469943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:55:09.470010 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:55:09.472071 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:55:09.472123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:55:09.472370 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:55:09.472413 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:55:09.472885 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:55:09.472938 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:55:09.473316 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:55:09.486114 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:55:09.486276 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:55:09.510439 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:55:09.510692 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:09.514150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:55:09.514207 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:09.514827 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:55:09.514878 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:09.515131 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:55:09.515193 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:55:09.515946 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:55:09.516009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:55:09.516777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:55:09.516841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:09.539686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:55:09.541894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:55:09.541989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:09.542416 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:55:09.542497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:09.549402 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:55:09.549575 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:55:09.619060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:55:09.619202 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:55:09.621241 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:55:09.622954 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:55:09.623008 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:55:09.632624 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:55:09.640339 systemd[1]: Switching root.
Oct 8 19:55:09.668097 systemd-journald[193]: Journal stopped
Oct 8 19:55:10.846431 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:55:10.846529 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:55:10.846549 kernel: SELinux: policy capability open_perms=1
Oct 8 19:55:10.846563 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:55:10.846583 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:55:10.846599 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:55:10.846614 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:55:10.846634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:55:10.846650 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:55:10.846665 kernel: audit: type=1403 audit(1728417310.128:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:55:10.846688 systemd[1]: Successfully loaded SELinux policy in 51.341ms.
Oct 8 19:55:10.846725 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.599ms.
Oct 8 19:55:10.846743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:55:10.846762 systemd[1]: Detected virtualization kvm.
Oct 8 19:55:10.846784 systemd[1]: Detected architecture x86-64.
Oct 8 19:55:10.846800 systemd[1]: Detected first boot.
Oct 8 19:55:10.846815 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:55:10.846831 zram_generator::config[1062]: No configuration found.
Oct 8 19:55:10.846849 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:55:10.846867 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:55:10.846893 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
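The "SELinux: policy capability ..." kernel lines above can be read back from selinuxfs once the system is up; a small sketch, assuming /sys/fs/selinux is mounted (as it is on this SELinux-enabled boot):

```python
from pathlib import Path

# Each file under policy_capabilities holds 0 or 1, mirroring the
# "SELinux: policy capability <name>=<value>" kernel lines above.
for cap in sorted(Path("/sys/fs/selinux/policy_capabilities").iterdir()):
    print(f"policy capability {cap.name}={cap.read_text().strip()}")
```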
Oct 8 19:55:10.846915 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:55:10.846933 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:55:10.846950 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:55:10.846967 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:55:10.846984 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:55:10.847001 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:55:10.847018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:55:10.847033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:55:10.847049 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:55:10.847069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:55:10.847086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:55:10.847102 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:55:10.847118 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:55:10.847135 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:55:10.847153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:55:10.847168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:55:10.847186 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:55:10.847203 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:55:10.847224 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:55:10.847244 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:55:10.847260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:55:10.847277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:55:10.847294 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:55:10.847310 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:55:10.847325 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:55:10.847342 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:55:10.847361 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:55:10.847376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:55:10.847392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:10.847406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:10.847421 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:55:10.847436 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:55:10.847453 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:55:10.847469 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:55:10.847926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:10.847949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:55:10.847963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:55:10.847977 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:55:10.847992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:55:10.848007 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:55:10.848021 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:55:10.848037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:10.848053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:55:10.848071 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:55:10.848085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:10.848099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:55:10.848114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:10.848128 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:55:10.848142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:10.848156 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:55:10.848171 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:55:10.848187 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:55:10.848202 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:55:10.848216 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:55:10.848230 kernel: fuse: init (API version 7.39)
Oct 8 19:55:10.848244 kernel: loop: module loaded
Oct 8 19:55:10.848263 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:55:10.848279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:55:10.848294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:55:10.848310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:55:10.848327 kernel: ACPI: bus type drm_connector registered
Oct 8 19:55:10.848342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:55:10.848380 systemd-journald[1129]: Collecting audit messages is disabled.
Oct 8 19:55:10.848418 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:55:10.848433 systemd[1]: Stopped verity-setup.service.
Oct 8 19:55:10.848448 systemd-journald[1129]: Journal started
Oct 8 19:55:10.848494 systemd-journald[1129]: Runtime Journal (/run/log/journal/0fae61f4c7cd4dccb4a08db584f32571) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:55:10.630785 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:55:10.650465 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
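The journald line above reports the runtime journal's current size, configured maximum, and remaining headroom; the three figures are internally consistent, as this small check shows:

```python
# From "Runtime Journal (...) is 6.0M, max 48.3M, 42.2M free" above.
size_mib, max_mib, free_mib = 6.0, 48.3, 42.2

# Used plus free headroom should add up to the maximum (within rounding).
assert abs((size_mib + free_mib) - max_mib) < 0.2
print(f"runtime journal: {size_mib} MiB used, "
      f"{free_mib} MiB of headroom up to {max_mib} MiB")
```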
Oct 8 19:55:10.650932 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:55:10.851592 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:10.856021 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:55:10.857070 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:55:10.858651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:55:10.860188 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:55:10.861625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:55:10.863089 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:55:10.864652 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:55:10.866257 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:55:10.868076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:55:10.869911 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:55:10.870125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:55:10.872181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:10.872412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:10.874179 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:55:10.874403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:55:10.876037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:10.876237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:10.878111 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:55:10.878337 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:55:10.880059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:10.880276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:10.881727 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:55:10.883356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:55:10.884939 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:55:10.900840 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:55:10.911566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:55:10.913900 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:55:10.915063 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:55:10.915088 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:55:10.917133 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:55:10.919488 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:55:10.921706 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:55:10.922913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:10.925430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:55:10.927824 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:55:10.929124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:55:10.933201 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:55:10.934530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:55:10.939311 systemd-journald[1129]: Time spent on flushing to /var/log/journal/0fae61f4c7cd4dccb4a08db584f32571 is 77.728ms for 991 entries.
Oct 8 19:55:10.939311 systemd-journald[1129]: System Journal (/var/log/journal/0fae61f4c7cd4dccb4a08db584f32571) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:55:11.024912 systemd-journald[1129]: Received client request to flush runtime journal.
Oct 8 19:55:10.939597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:55:11.013320 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:55:11.016975 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:55:11.021773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:55:11.023588 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:55:11.025181 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:55:11.027471 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:55:11.032531 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:55:11.049568 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 19:55:11.052890 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:55:11.054769 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:55:11.058983 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:55:11.064794 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:55:11.066933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:55:11.070158 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:55:11.131967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:55:11.131051 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:55:11.141966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:55:11.144828 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:55:11.146559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:55:11.161130 kernel: loop1: detected capacity change from 0 to 211296
Oct 8 19:55:11.170351 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Oct 8 19:55:11.170787 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Oct 8 19:55:11.180440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:11.208655 kernel: loop2: detected capacity change from 0 to 142488
Oct 8 19:55:11.289629 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 19:55:11.308518 kernel: loop4: detected capacity change from 0 to 211296
Oct 8 19:55:11.315508 kernel: loop5: detected capacity change from 0 to 142488
Oct 8 19:55:11.324807 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:55:11.325383 (sd-merge)[1200]: Merged extensions into '/usr'.
Oct 8 19:55:11.329846 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:55:11.329871 systemd[1]: Reloading...
Oct 8 19:55:11.404590 zram_generator::config[1226]: No configuration found.
Oct 8 19:55:11.506222 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:55:11.571541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:55:11.632375 systemd[1]: Reloading finished in 302 ms.
Oct 8 19:55:11.695457 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:55:11.697101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:55:11.708772 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:55:11.711013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:55:11.717675 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:55:11.717696 systemd[1]: Reloading...
Oct 8 19:55:11.738873 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:55:11.739252 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:55:11.740257 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:55:11.740570 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Oct 8 19:55:11.740653 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Oct 8 19:55:11.743936 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:55:11.743948 systemd-tmpfiles[1264]: Skipping /boot
Oct 8 19:55:11.759556 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:55:11.759571 systemd-tmpfiles[1264]: Skipping /boot
Oct 8 19:55:11.780545 zram_generator::config[1291]: No configuration found.
Oct 8 19:55:11.905357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:55:11.963304 systemd[1]: Reloading finished in 245 ms.
Oct 8 19:55:12.044530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:55:12.066803 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:55:12.113694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:55:12.117792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
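The "(sd-merge)" lines above show systemd-sysext overlaying three extension images onto /usr. A sketch of listing the images it would consider, using the standard sysext search directories (the log does not show which of these directories each of the three images lives in; note Ignition wrote /etc/extensions/kubernetes.raw earlier):

```python
from pathlib import Path

# Standard systemd-sysext search paths for extension images/directories.
for base in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    d = Path(base)
    if d.is_dir():
        for entry in sorted(d.iterdir()):
            print(f"{base}: {entry.name}")
```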
Oct 8 19:55:12.124880 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:55:12.127896 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:55:12.134006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.134168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:12.138710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:12.143722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:12.150558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:12.151736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:12.151857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.152825 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:55:12.154526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:12.155725 augenrules[1352]: No rules
Oct 8 19:55:12.154696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:12.156581 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:55:12.158355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:12.158539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:12.160498 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:12.160670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:12.162767 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:55:12.173019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.173252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:12.179721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:12.182189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:12.184423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:12.185549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:12.187255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:55:12.190188 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:55:12.193644 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:55:12.194971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.196824 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:55:12.198981 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:55:12.201408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:12.201713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:12.203968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:12.204206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:12.206681 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:12.206908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:12.209021 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:55:12.222450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.222673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:12.222697 systemd-udevd[1366]: Using default interface naming scheme 'v255'.
Oct 8 19:55:12.234300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:12.237088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:55:12.241582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:12.244228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:12.246624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:12.246802 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:55:12.246952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:55:12.248047 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:12.251166 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:55:12.257359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:12.257645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:12.260997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:55:12.261237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:55:12.263247 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:12.264617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:12.274134 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:55:12.280403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:12.281340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:12.293516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393)
Oct 8 19:55:12.293580 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1394)
Oct 8 19:55:12.304649 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:55:12.305726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:55:12.305800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:55:12.307870 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:55:12.408310 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1394)
Oct 8 19:55:12.399100 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:55:12.418467 systemd-resolved[1343]: Positive Trust Anchors:
Oct 8 19:55:12.418822 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:55:12.418924 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:55:12.424161 systemd-resolved[1343]: Defaulting to hostname 'linux'.
Oct 8 19:55:12.426199 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:55:12.428018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:55:12.442510 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 8 19:55:12.442574 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 8 19:55:12.451958 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:55:12.452058 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 19:55:12.453914 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 19:55:12.454107 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 19:55:12.445097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:55:12.461686 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:55:12.466517 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 8 19:55:12.479817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:55:12.481891 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:55:12.488011 systemd-networkd[1408]: lo: Link UP
Oct 8 19:55:12.488023 systemd-networkd[1408]: lo: Gained carrier
Oct 8 19:55:12.489213 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:55:12.491071 systemd-networkd[1408]: Enumeration completed
Oct 8 19:55:12.491547 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:55:12.491610 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:55:12.492979 systemd-networkd[1408]: eth0: Link UP
Oct 8 19:55:12.493031 systemd-networkd[1408]: eth0: Gained carrier
Oct 8 19:55:12.493078 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
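The positive trust anchor systemd-resolved logs above is the IANA root zone KSK expressed as a DS record; splitting it into its RDATA fields shows what each number means:

```python
# The DS record from the "Positive Trust Anchors:" block above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
print(f"owner={owner} key tag={key_tag}")
print(f"algorithm={algorithm} (8 = RSA/SHA-256)")
print(f"digest type={digest_type} (2 = SHA-256), digest={digest[:16]}...")
```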
Oct 8 19:55:12.494695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:12.495985 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:55:12.497583 systemd[1]: Reached target network.target - Network.
Oct 8 19:55:12.504987 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:55:12.507584 systemd-networkd[1408]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:55:12.508450 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Oct 8 19:55:13.407694 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:55:13.407744 systemd-timesyncd[1409]: Initial clock synchronization to Tue 2024-10-08 19:55:13.407596 UTC.
Oct 8 19:55:13.407776 systemd-resolved[1343]: Clock change detected. Flushing caches.
Oct 8 19:55:13.414102 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:55:13.459662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:13.497294 kernel: kvm_amd: TSC scaling supported
Oct 8 19:55:13.497333 kernel: kvm_amd: Nested Virtualization enabled
Oct 8 19:55:13.497347 kernel: kvm_amd: Nested Paging enabled
Oct 8 19:55:13.498247 kernel: kvm_amd: LBR virtualization supported
Oct 8 19:55:13.498276 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 8 19:55:13.499220 kernel: kvm_amd: Virtual GIF supported
Oct 8 19:55:13.518055 kernel: EDAC MC: Ver: 3.0.0
Oct 8 19:55:13.552545 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:55:13.565236 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:55:13.574619 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:55:13.781992 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:55:13.783548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:55:13.784691 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:55:13.785847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:55:13.787085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:55:13.788555 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:55:13.789762 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:55:13.790995 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:55:13.792214 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:55:13.792250 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:55:13.793190 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:55:13.794653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:55:13.797324 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:55:13.814448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:55:13.816696 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:55:13.818223 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:55:13.819342 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:55:13.820299 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:55:13.821235 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:55:13.821267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:55:13.822200 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:55:13.824226 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:55:13.829104 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:55:13.829440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:55:13.833271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:55:13.834319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:55:13.838332 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:55:13.839017 jq[1443]: false
Oct 8 19:55:13.844203 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:55:13.850696 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:55:13.854689 extend-filesystems[1444]: Found loop3
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found loop4
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found loop5
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found sr0
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda1
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda2
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda3
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found usr
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda4
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda6
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda7
Oct 8 19:55:13.856706 extend-filesystems[1444]: Found vda9
Oct 8 19:55:13.856706 extend-filesystems[1444]: Checking size of /dev/vda9
Oct 8 19:55:13.855708 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:55:13.865044 dbus-daemon[1442]: [system] SELinux support is enabled
Oct 8 19:55:13.879007 extend-filesystems[1444]: Resized partition /dev/vda9
Oct 8 19:55:13.862854 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:55:13.864393 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:55:13.864950 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:55:13.866288 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:55:13.868905 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:55:13.873202 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:55:13.877907 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
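The "Found ..." table above is extend-filesystems enumerating block devices before deciding what to grow; the same inventory can be taken straight from sysfs:

```python
from pathlib import Path

# Enumerate block devices the way the "Found loop3 ... Found vda9"
# lines above do, reading device names from /sys/class/block.
for dev in sorted(p.name for p in Path("/sys/class/block").iterdir()):
    print("Found", dev)
```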
Oct 8 19:55:13.882806 jq[1459]: true Oct 8 19:55:13.886161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393) Oct 8 19:55:13.885507 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:55:13.885729 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:55:13.886066 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:55:13.887479 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Oct 8 19:55:13.889223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:55:13.896481 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:55:13.903197 update_engine[1458]: I20241008 19:55:13.902812 1458 main.cc:92] Flatcar Update Engine starting Oct 8 19:55:13.912280 update_engine[1458]: I20241008 19:55:13.906656 1458 update_check_scheduler.cc:74] Next update check in 10m59s Oct 8 19:55:13.912662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:55:13.912896 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:55:13.923459 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:55:13.926735 jq[1469]: true Oct 8 19:55:13.936105 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:55:13.942273 tar[1467]: linux-amd64/helm Oct 8 19:55:13.949134 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:55:13.954020 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:55:13.954051 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:55:13.955753 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:55:13.955773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:55:13.961147 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:55:13.961147 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:55:13.961147 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:55:13.959376 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 19:55:13.987349 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:55:13.987452 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Oct 8 19:55:13.959398 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 19:55:13.964138 systemd-logind[1457]: New seat seat0. Oct 8 19:55:13.966445 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:55:13.972745 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:55:13.973690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:55:13.975347 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:55:13.978870 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
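The resize above grows the root filesystem on /dev/vda9 online, from 553472 to 1864699 blocks at the 4 KiB block size noted in the resize2fs output. The before/after sizes follow directly from those numbers; a quick arithmetic check:

```python
BLOCK = 4096  # 4 KiB ext4 block size, per the "(4k)" in the resize2fs output

old_blocks, new_blocks = 553472, 1864699
old_bytes = old_blocks * BLOCK   # 2,267,021,312 bytes
new_bytes = new_blocks * BLOCK   # 7,637,807,104 bytes

for label, n in (("before", old_bytes), ("after", new_bytes)):
    print(f"{label}: {n} bytes = {n / 2**30:.2f} GiB")
# before: ~2.11 GiB, after: ~7.11 GiB
```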
Oct 8 19:55:13.993366 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:55:14.012475 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:55:14.109578 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:55:14.135255 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:55:14.139523 containerd[1470]: time="2024-10-08T19:55:14.139449269Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:55:14.152367 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:55:14.160934 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:55:14.161167 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:55:14.165075 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:55:14.166430 containerd[1470]: time="2024-10-08T19:55:14.166370295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168147 containerd[1470]: time="2024-10-08T19:55:14.168026832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168147 containerd[1470]: time="2024-10-08T19:55:14.168054313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:55:14.168147 containerd[1470]: time="2024-10-08T19:55:14.168068630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:55:14.168301 containerd[1470]: time="2024-10-08T19:55:14.168275779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:55:14.168328 containerd[1470]: time="2024-10-08T19:55:14.168305635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168475 containerd[1470]: time="2024-10-08T19:55:14.168387679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168475 containerd[1470]: time="2024-10-08T19:55:14.168412255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168664 containerd[1470]: time="2024-10-08T19:55:14.168643138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168664 containerd[1470]: time="2024-10-08T19:55:14.168662815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168716 containerd[1470]: time="2024-10-08T19:55:14.168676110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168716 containerd[1470]: time="2024-10-08T19:55:14.168686369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.168797 containerd[1470]: time="2024-10-08T19:55:14.168780225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.169024 containerd[1470]: time="2024-10-08T19:55:14.169004716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:55:14.169159 containerd[1470]: time="2024-10-08T19:55:14.169139258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:55:14.169159 containerd[1470]: time="2024-10-08T19:55:14.169156180Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:55:14.169315 containerd[1470]: time="2024-10-08T19:55:14.169294379Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:55:14.169371 containerd[1470]: time="2024-10-08T19:55:14.169356616Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:55:14.175591 containerd[1470]: time="2024-10-08T19:55:14.175558166Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:55:14.175626 containerd[1470]: time="2024-10-08T19:55:14.175599283Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:55:14.175626 containerd[1470]: time="2024-10-08T19:55:14.175614682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:55:14.175664 containerd[1470]: time="2024-10-08T19:55:14.175639699Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:55:14.175664 containerd[1470]: time="2024-10-08T19:55:14.175656961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:55:14.175870 containerd[1470]: time="2024-10-08T19:55:14.175846587Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:55:14.177020 containerd[1470]: time="2024-10-08T19:55:14.176987356Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:55:14.177166 containerd[1470]: time="2024-10-08T19:55:14.177141355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:55:14.177189 containerd[1470]: time="2024-10-08T19:55:14.177164268Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:55:14.177189 containerd[1470]: time="2024-10-08T19:55:14.177178955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:55:14.177256 containerd[1470]: time="2024-10-08T19:55:14.177193543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 8 19:55:14.177256 containerd[1470]: time="2024-10-08T19:55:14.177206838Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177256 containerd[1470]: time="2024-10-08T19:55:14.177218890Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177256 containerd[1470]: time="2024-10-08T19:55:14.177236363Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177256 containerd[1470]: time="2024-10-08T19:55:14.177250750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177264566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177278482Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177293721Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177318928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177334417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177348824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177363972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177385863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177402114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177425047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177447329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177462708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177479188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177507 containerd[1470]: time="2024-10-08T19:55:14.177493475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177506109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177519133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177534953Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177558818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177574998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177587001Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177625483Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177639589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177650430Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177662322Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177671970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177684433Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177694923Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:55:14.177775 containerd[1470]: time="2024-10-08T19:55:14.177706545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:55:14.178018 containerd[1470]: time="2024-10-08T19:55:14.177970410Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:55:14.178158 containerd[1470]: time="2024-10-08T19:55:14.178024521Z" level=info msg="Connect containerd service" Oct 8 19:55:14.178158 containerd[1470]: time="2024-10-08T19:55:14.178066079Z" level=info msg="using legacy CRI server" Oct 8 19:55:14.178158 containerd[1470]: time="2024-10-08T19:55:14.178073583Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:55:14.178223 containerd[1470]: time="2024-10-08T19:55:14.178175404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:55:14.178769 containerd[1470]: time="2024-10-08T19:55:14.178735835Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:55:14.178995 
containerd[1470]: time="2024-10-08T19:55:14.178937193Z" level=info msg="Start subscribing containerd event" Oct 8 19:55:14.182167 containerd[1470]: time="2024-10-08T19:55:14.182139849Z" level=info msg="Start recovering state" Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182230028Z" level=info msg="Start event monitor" Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.179152246Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182259944Z" level=info msg="Start snapshots syncer" Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182382013Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182395268Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182405517Z" level=info msg="Start streaming server" Oct 8 19:55:14.182994 containerd[1470]: time="2024-10-08T19:55:14.182485016Z" level=info msg="containerd successfully booted in 0.045052s" Oct 8 19:55:14.182545 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:55:14.189394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:55:14.206366 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:55:14.208557 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 19:55:14.209823 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:55:14.325296 tar[1467]: linux-amd64/LICENSE Oct 8 19:55:14.325428 tar[1467]: linux-amd64/README.md Oct 8 19:55:14.340796 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:55:15.414318 systemd-networkd[1408]: eth0: Gained IPv6LL Oct 8 19:55:15.417825 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:55:15.419797 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:55:15.431290 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:55:15.433813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:15.436064 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:55:15.454813 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:55:15.455146 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:55:15.456923 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:55:15.459758 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:55:16.044155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:16.045875 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:55:16.047191 systemd[1]: Startup finished in 766ms (kernel) + 5.406s (initrd) + 5.075s (userspace) = 11.248s. 
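The CRI plugin dump above shows the runc runtime configured with SystemdCgroup:true, the sandbox image set to registry.k8s.io/pause:3.8, and TCP streaming disabled. In containerd's own config.toml those runc options live under the CRI plugin tree; the fragment below is reconstructed from the dumped values (not copied from this host's file) and round-tripped through Python's standard tomllib to show the nesting:

```python
import tomllib  # stdlib TOML parser, Python 3.11+

# Equivalent config.toml fragment for the runc options seen in the CRI dump.
fragment = """
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""

cfg = tomllib.loads(fragment)
cri = cfg["plugins"]["io.containerd.grpc.v1.cri"]
print(cri["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])  # True
```

SystemdCgroup=true matches the CgroupDriver:systemd the kubelet reports later in this log, which is the pairing both components expect when systemd manages the cgroup tree.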
Oct 8 19:55:16.051951 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:55:16.584299 kubelet[1554]: E1008 19:55:16.584186 1554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:55:16.589651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:55:16.589915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:55:16.590358 systemd[1]: kubelet.service: Consumed 1.029s CPU time. Oct 8 19:55:23.770870 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:55:23.772331 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:43006.service - OpenSSH per-connection server daemon (10.0.0.1:43006). Oct 8 19:55:23.820069 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 43006 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:23.822497 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:23.834254 systemd-logind[1457]: New session 1 of user core. Oct 8 19:55:23.836193 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:55:23.848413 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:55:23.865270 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:55:23.885526 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:55:23.888835 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:55:24.015173 systemd[1573]: Queued start job for default target default.target. Oct 8 19:55:24.027464 systemd[1573]: Created slice app.slice - User Application Slice. Oct 8 19:55:24.027494 systemd[1573]: Reached target paths.target - Paths. Oct 8 19:55:24.027508 systemd[1573]: Reached target timers.target - Timers. Oct 8 19:55:24.029187 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:55:24.041739 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:55:24.041869 systemd[1573]: Reached target sockets.target - Sockets. Oct 8 19:55:24.041883 systemd[1573]: Reached target basic.target - Basic System. Oct 8 19:55:24.041919 systemd[1573]: Reached target default.target - Main User Target. Oct 8 19:55:24.041952 systemd[1573]: Startup finished in 145ms. Oct 8 19:55:24.042496 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:55:24.044280 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:55:24.104278 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016). Oct 8 19:55:24.140020 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.141727 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.145813 systemd-logind[1457]: New session 2 of user core. Oct 8 19:55:24.155318 systemd[1]: Started session-2.scope - Session 2 of User core. 
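The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so the unit will keep failing until the node is bootstrapped. A sketch of the same check, with the general shape of the file the kubelet is looking for; the field values below are illustrative placeholders, not taken from this host:

```python
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.exists():
    print(f"{cfg} missing - kubelet exits until kubeadm init/join writes it")

# General shape of a KubeletConfiguration (placeholder values):
EXAMPLE = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
print(EXAMPLE)
```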
Oct 8 19:55:24.209350 sshd[1584]: pam_unix(sshd:session): session closed for user core Oct 8 19:55:24.216607 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:43016.service: Deactivated successfully. Oct 8 19:55:24.218146 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:55:24.219793 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:55:24.226471 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032). Oct 8 19:55:24.227229 systemd-logind[1457]: Removed session 2. Oct 8 19:55:24.256968 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.258492 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.262286 systemd-logind[1457]: New session 3 of user core. Oct 8 19:55:24.272199 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:55:24.321005 sshd[1591]: pam_unix(sshd:session): session closed for user core Oct 8 19:55:24.330780 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:43032.service: Deactivated successfully. Oct 8 19:55:24.332526 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:55:24.334068 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:55:24.335307 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:43092.service - OpenSSH per-connection server daemon (10.0.0.1:43092). Oct 8 19:55:24.336002 systemd-logind[1457]: Removed session 3. Oct 8 19:55:24.379984 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.381513 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.385168 systemd-logind[1457]: New session 4 of user core. Oct 8 19:55:24.395236 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:55:24.449909 sshd[1599]: pam_unix(sshd:session): session closed for user core Oct 8 19:55:24.456927 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:43092.service: Deactivated successfully. Oct 8 19:55:24.458790 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:55:24.460536 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:55:24.471519 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:43104.service - OpenSSH per-connection server daemon (10.0.0.1:43104). Oct 8 19:55:24.472544 systemd-logind[1457]: Removed session 4. Oct 8 19:55:24.503008 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 43104 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.504728 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.509205 systemd-logind[1457]: New session 5 of user core. Oct 8 19:55:24.519206 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:55:24.609523 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:55:24.609984 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:55:24.632954 sudo[1610]: pam_unix(sudo:session): session closed for user root Oct 8 19:55:24.634965 sshd[1606]: pam_unix(sshd:session): session closed for user core Oct 8 19:55:24.651844 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:43104.service: Deactivated successfully. Oct 8 19:55:24.653494 systemd[1]: session-5.scope: Deactivated successfully. 
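The first sudo entry above runs setenforce 1, switching SELinux into enforcing mode. The kernel exposes the current mode at /sys/fs/selinux/enforce (0 = permissive, 1 = enforcing), so verifying the effect of that command is a one-line read; this sketch assumes selinuxfs is mounted at the usual path:

```python
from pathlib import Path

# 0 = permissive, 1 = enforcing; the file exists only when SELinux is enabled
# and selinuxfs is mounted (normally at /sys/fs/selinux).
enforce = Path("/sys/fs/selinux/enforce")
if enforce.exists():
    mode = enforce.read_text().strip()
    print("enforcing" if mode == "1" else "permissive")
else:
    print("SELinux not enabled on this host")
```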
Oct 8 19:55:24.655158 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:55:24.662350 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:43110.service - OpenSSH per-connection server daemon (10.0.0.1:43110). Oct 8 19:55:24.663205 systemd-logind[1457]: Removed session 5. Oct 8 19:55:24.692729 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43110 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.694849 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.698724 systemd-logind[1457]: New session 6 of user core. Oct 8 19:55:24.708340 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:55:24.761986 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:55:24.762353 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:55:24.766306 sudo[1619]: pam_unix(sudo:session): session closed for user root Oct 8 19:55:24.772484 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:55:24.772820 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:55:24.800502 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:55:24.802976 auditctl[1622]: No rules Oct 8 19:55:24.804501 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:55:24.804806 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:55:24.806878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:55:24.854910 augenrules[1640]: No rules Oct 8 19:55:24.856754 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:55:24.858342 sudo[1618]: pam_unix(sudo:session): session closed for user root Oct 8 19:55:24.860272 sshd[1615]: pam_unix(sshd:session): session closed for user core Oct 8 19:55:24.871688 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:43110.service: Deactivated successfully. Oct 8 19:55:24.874406 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:55:24.876726 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:55:24.887514 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:43122.service - OpenSSH per-connection server daemon (10.0.0.1:43122). Oct 8 19:55:24.888656 systemd-logind[1457]: Removed session 6. Oct 8 19:55:24.919959 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 43122 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:55:24.921725 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:55:24.926840 systemd-logind[1457]: New session 7 of user core. Oct 8 19:55:24.942289 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:55:24.996031 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:55:24.996459 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:55:25.284392 systemd[1]: Starting docker.service - Docker Application Container Engine... 
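The sequence above removes two audit rule files, restarts audit-rules.service, and both auditctl and augenrules then report "No rules", i.e. the kernel's audit rule list is empty. The same check can be scripted against the real auditctl CLI; `-l` lists the loaded rules and prints "No rules" on an empty set, exactly as in the log (root required):

```python
import subprocess

# List the kernel's loaded audit rules; prints "No rules" when the set is
# empty, matching the auditctl/augenrules output above. Requires root.
result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```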
Oct 8 19:55:25.284533 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:55:25.789931 dockerd[1668]: time="2024-10-08T19:55:25.789848696Z" level=info msg="Starting up" Oct 8 19:55:26.006258 dockerd[1668]: time="2024-10-08T19:55:26.006208348Z" level=info msg="Loading containers: start." Oct 8 19:55:26.166129 kernel: Initializing XFRM netlink socket Oct 8 19:55:26.253208 systemd-networkd[1408]: docker0: Link UP Oct 8 19:55:26.283954 dockerd[1668]: time="2024-10-08T19:55:26.283917402Z" level=info msg="Loading containers: done." Oct 8 19:55:26.355972 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1968413314-merged.mount: Deactivated successfully. Oct 8 19:55:26.357781 dockerd[1668]: time="2024-10-08T19:55:26.357739414Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:55:26.357891 dockerd[1668]: time="2024-10-08T19:55:26.357867234Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:55:26.358023 dockerd[1668]: time="2024-10-08T19:55:26.357991106Z" level=info msg="Daemon has completed initialization" Oct 8 19:55:26.398165 dockerd[1668]: time="2024-10-08T19:55:26.398044831Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:55:26.398637 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:55:26.840351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:55:26.848302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:27.005548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:27.010801 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:55:27.067935 kubelet[1824]: E1008 19:55:27.067854 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:55:27.076387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:55:27.076645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:55:27.439143 containerd[1470]: time="2024-10-08T19:55:27.439080655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 19:55:28.410642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027754367.mount: Deactivated successfully. 
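With dockerd reporting "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal smoke test that speaks HTTP to it directly; /version is a real Engine API endpoint, and requesting with HTTP/1.0 keeps the response unchunked so the body can simply be read to EOF (a quick sketch, not a substitute for the docker CLI or SDK):

```python
import json
import socket

# Talk to the Docker Engine API over its unix socket ("API listen on
# /run/docker.sock" in the log). HTTP/1.0 makes the daemon close the
# connection after one response, so we read until EOF.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    resp = b""
    while chunk := s.recv(4096):
        resp += chunk

_headers, _, body = resp.partition(b"\r\n\r\n")
print(json.loads(body)["Version"])  # e.g. "26.1.0", as logged by dockerd above
```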
Oct 8 19:55:30.053162 containerd[1470]: time="2024-10-08T19:55:30.053104823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:30.054015 containerd[1470]: time="2024-10-08T19:55:30.053977129Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 8 19:55:30.055372 containerd[1470]: time="2024-10-08T19:55:30.055342770Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:30.058568 containerd[1470]: time="2024-10-08T19:55:30.058531370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:30.059531 containerd[1470]: time="2024-10-08T19:55:30.059501159Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.620353769s" Oct 8 19:55:30.059588 containerd[1470]: time="2024-10-08T19:55:30.059534251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 8 19:55:30.091396 containerd[1470]: time="2024-10-08T19:55:30.091345879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 19:55:32.445462 containerd[1470]: time="2024-10-08T19:55:32.445382027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:32.446392 containerd[1470]: time="2024-10-08T19:55:32.446330726Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 8 19:55:32.447778 containerd[1470]: time="2024-10-08T19:55:32.447740580Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:32.451400 containerd[1470]: time="2024-10-08T19:55:32.451337926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:32.452569 containerd[1470]: time="2024-10-08T19:55:32.452523189Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.361130452s" Oct 8 19:55:32.452569 containerd[1470]: time="2024-10-08T19:55:32.452566050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 8 19:55:32.486360 containerd[1470]: 
time="2024-10-08T19:55:32.486309130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 19:55:34.144846 containerd[1470]: time="2024-10-08T19:55:34.144779714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:34.146182 containerd[1470]: time="2024-10-08T19:55:34.146108436Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 8 19:55:34.147357 containerd[1470]: time="2024-10-08T19:55:34.147325899Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:34.151106 containerd[1470]: time="2024-10-08T19:55:34.149955170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:34.152557 containerd[1470]: time="2024-10-08T19:55:34.152516554Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.666170465s" Oct 8 19:55:34.152557 containerd[1470]: time="2024-10-08T19:55:34.152547291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 8 19:55:34.176257 containerd[1470]: time="2024-10-08T19:55:34.176215157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 19:55:35.180803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490380148.mount: Deactivated successfully. 
Oct 8 19:55:35.915446 containerd[1470]: time="2024-10-08T19:55:35.915389089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:35.916220 containerd[1470]: time="2024-10-08T19:55:35.916183389Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 8 19:55:35.917446 containerd[1470]: time="2024-10-08T19:55:35.917396984Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:35.919793 containerd[1470]: time="2024-10-08T19:55:35.919746160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:35.920442 containerd[1470]: time="2024-10-08T19:55:35.920392352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.744134865s" Oct 8 19:55:35.920442 containerd[1470]: time="2024-10-08T19:55:35.920429201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 8 19:55:35.945208 containerd[1470]: time="2024-10-08T19:55:35.945166913Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:55:36.492072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649649205.mount: Deactivated successfully. 
Oct 8 19:55:37.165923 containerd[1470]: time="2024-10-08T19:55:37.165866370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:37.166683 containerd[1470]: time="2024-10-08T19:55:37.166612489Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 19:55:37.167983 containerd[1470]: time="2024-10-08T19:55:37.167935691Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:37.170713 containerd[1470]: time="2024-10-08T19:55:37.170675018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:37.171802 containerd[1470]: time="2024-10-08T19:55:37.171757678Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.226371183s" Oct 8 19:55:37.171859 containerd[1470]: time="2024-10-08T19:55:37.171803244Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 19:55:37.194405 containerd[1470]: time="2024-10-08T19:55:37.194363252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:55:37.326816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:55:37.341268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:37.485240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:37.490741 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:55:37.534236 kubelet[1996]: E1008 19:55:37.534158 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:55:37.539331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:55:37.539592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:55:38.050346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866395454.mount: Deactivated successfully. 
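"Scheduled restart job, restart counter is at 2" is systemd's Restart= handling re-queueing the failed kubelet unit. The spacing between each failure and the next scheduled restart in these entries is consistent with a roughly 10-second RestartSec; checking it straight from the log timestamps (arithmetic only):

```python
from datetime import datetime

FMT = "%H:%M:%S.%f"

# (failure logged, restart scheduled) pairs taken from the entries above
pairs = [
    ("19:55:16.589915", "19:55:26.840351"),  # counter -> 1
    ("19:55:27.076645", "19:55:37.326816"),  # counter -> 2
]
for failed, restarted in pairs:
    delta = datetime.strptime(restarted, FMT) - datetime.strptime(failed, FMT)
    print(f"restart delay: {delta.total_seconds():.2f}s")  # ~10.25s each time
```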
Oct 8 19:55:38.055879 containerd[1470]: time="2024-10-08T19:55:38.055822075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:38.056619 containerd[1470]: time="2024-10-08T19:55:38.056565068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 8 19:55:38.057745 containerd[1470]: time="2024-10-08T19:55:38.057693494Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:38.059800 containerd[1470]: time="2024-10-08T19:55:38.059760130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:38.060444 containerd[1470]: time="2024-10-08T19:55:38.060415279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 866.016961ms" Oct 8 19:55:38.060444 containerd[1470]: time="2024-10-08T19:55:38.060441818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 19:55:38.082021 containerd[1470]: time="2024-10-08T19:55:38.081983857Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 19:55:38.695586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445179617.mount: Deactivated successfully. Oct 8 19:55:42.342027 containerd[1470]: time="2024-10-08T19:55:42.341959322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:42.343129 containerd[1470]: time="2024-10-08T19:55:42.343049767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 8 19:55:42.347831 containerd[1470]: time="2024-10-08T19:55:42.347744582Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:43.172117 containerd[1470]: time="2024-10-08T19:55:43.172049347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:55:43.173442 containerd[1470]: time="2024-10-08T19:55:43.173358072Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.091334249s" Oct 8 19:55:43.173492 containerd[1470]: time="2024-10-08T19:55:43.173451156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 8 19:55:45.901690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
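The etcd pull above is by far the slowest of the series: 56,651,625 bytes read in 5.091334249 s. That works out to roughly 11 MB/s, a lower bound on average transfer rate since the reported duration also includes unpacking; the check is one division over the numbers from the log:

```python
bytes_read = 56_651_625   # "bytes read=56651625" for registry.k8s.io/etcd:3.5.10-0
elapsed = 5.091334249     # "in 5.091334249s"

mb_per_s = bytes_read / elapsed / 1_000_000
print(f"~{mb_per_s:.1f} MB/s")  # ~11.1 MB/s
```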
Oct 8 19:55:45.916348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:45.935590 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit session-7.scope)... Oct 8 19:55:45.935607 systemd[1]: Reloading... Oct 8 19:55:46.022183 zram_generator::config[2181]: No configuration found. Oct 8 19:55:46.287445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:55:46.363440 systemd[1]: Reloading finished in 427 ms. Oct 8 19:55:46.421858 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:55:46.421971 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:55:46.422300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:46.425227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:46.581500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:46.587440 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:55:46.634469 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:55:46.634469 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:55:46.634469 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:55:46.635583 kubelet[2230]: I1008 19:55:46.635475 2230 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:55:46.943723 kubelet[2230]: I1008 19:55:46.943582 2230 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:55:46.943723 kubelet[2230]: I1008 19:55:46.943620 2230 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:55:46.943871 kubelet[2230]: I1008 19:55:46.943837 2230 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:55:46.962126 kubelet[2230]: E1008 19:55:46.962051 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.964521 kubelet[2230]: I1008 19:55:46.964495 2230 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:55:46.980029 kubelet[2230]: I1008 19:55:46.979964 2230 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:55:46.981629 kubelet[2230]: I1008 19:55:46.981587 2230 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:55:46.981831 kubelet[2230]: I1008 19:55:46.981800 2230 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:55:46.981955 kubelet[2230]: I1008 19:55:46.981835 2230 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:55:46.981955 kubelet[2230]: I1008 19:55:46.981849 2230 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:55:46.982028 kubelet[2230]: I1008 19:55:46.981992 2230 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:55:46.982160 kubelet[2230]: I1008 19:55:46.982131 2230 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:55:46.982160 kubelet[2230]: I1008 19:55:46.982152 2230 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:55:46.982225 kubelet[2230]: I1008 19:55:46.982182 2230 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:55:46.982225 kubelet[2230]: I1008 19:55:46.982198 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:55:46.983955 kubelet[2230]: I1008 19:55:46.983923 2230 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:55:46.984178 kubelet[2230]: W1008 19:55:46.984110 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.984213 kubelet[2230]: E1008 19:55:46.984200 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.984267 kubelet[2230]: W1008 19:55:46.983819 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.984297 kubelet[2230]: E1008 19:55:46.984276 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.986337 kubelet[2230]: I1008 19:55:46.986315 2230 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:55:46.986387 kubelet[2230]: W1008 19:55:46.986376 2230 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:55:46.987266 kubelet[2230]: I1008 19:55:46.986966 2230 server.go:1256] "Started kubelet" Oct 8 19:55:46.987266 kubelet[2230]: I1008 19:55:46.987079 2230 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:55:46.987266 kubelet[2230]: I1008 19:55:46.987185 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:55:46.988327 kubelet[2230]: I1008 19:55:46.987508 2230 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:55:46.988327 kubelet[2230]: I1008 19:55:46.987946 2230 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:55:46.989362 kubelet[2230]: I1008 19:55:46.989204 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:55:46.991051 kubelet[2230]: E1008 19:55:46.991035 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:46.991955 kubelet[2230]: I1008 19:55:46.991349 2230 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:55:46.991955 kubelet[2230]: I1008 19:55:46.991429 2230 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:55:46.991955 kubelet[2230]: I1008 19:55:46.991476 2230 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:55:46.991955 kubelet[2230]: W1008 19:55:46.991781 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.991955 kubelet[2230]: E1008 19:55:46.991815 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:46.991955 kubelet[2230]: E1008 19:55:46.991832 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Oct 8 19:55:46.992186 kubelet[2230]: E1008 19:55:46.992160 2230 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:55:46.992386 kubelet[2230]: I1008 19:55:46.992369 2230 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:55:46.992563 kubelet[2230]: E1008 19:55:46.992524 2230 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc926b66c687f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:55:46.986870769 +0000 UTC m=+0.394614343,LastTimestamp:2024-10-08 19:55:46.986870769 +0000 UTC m=+0.394614343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:55:46.993493 kubelet[2230]: I1008 19:55:46.993461 2230 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:55:46.993493 kubelet[2230]: I1008 19:55:46.993479 2230 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:55:47.008309 kubelet[2230]: I1008 19:55:47.008252 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:55:47.009725 kubelet[2230]: I1008 19:55:47.009685 2230 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:55:47.009725 kubelet[2230]: I1008 19:55:47.009704 2230 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:55:47.009725 kubelet[2230]: I1008 19:55:47.009722 2230 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:55:47.010458 kubelet[2230]: I1008 19:55:47.010037 2230 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:55:47.010458 kubelet[2230]: I1008 19:55:47.010079 2230 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:55:47.010458 kubelet[2230]: I1008 19:55:47.010122 2230 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:55:47.010458 kubelet[2230]: E1008 19:55:47.010208 2230 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:55:47.011531 kubelet[2230]: W1008 19:55:47.011068 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.011531 kubelet[2230]: E1008 19:55:47.011135 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.093309 kubelet[2230]: I1008 19:55:47.093274 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:47.093685 kubelet[2230]: E1008 19:55:47.093664 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 8 19:55:47.110902 kubelet[2230]: E1008 19:55:47.110857 2230 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:55:47.192646 kubelet[2230]: E1008 19:55:47.192580 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Oct 8 19:55:47.295597 kubelet[2230]: I1008 19:55:47.295457 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:47.295943 kubelet[2230]: E1008 19:55:47.295912 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 8 19:55:47.312063 kubelet[2230]: E1008 19:55:47.311988 2230 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:55:47.574230 kubelet[2230]: I1008 19:55:47.574009 2230 policy_none.go:49] "None policy: Start" Oct 8 19:55:47.574998 kubelet[2230]: I1008 19:55:47.574954 2230 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:55:47.575137 kubelet[2230]: I1008 19:55:47.574994 2230 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:55:47.593868 kubelet[2230]: E1008 19:55:47.593817 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Oct 8 19:55:47.631245 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:55:47.650245 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 8 19:55:47.655517 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 19:55:47.663001 kubelet[2230]: I1008 19:55:47.662970 2230 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:55:47.663439 kubelet[2230]: I1008 19:55:47.663319 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:55:47.664210 kubelet[2230]: E1008 19:55:47.664179 2230 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:55:47.697185 kubelet[2230]: I1008 19:55:47.697159 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:47.697464 kubelet[2230]: E1008 19:55:47.697440 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 8 19:55:47.712636 kubelet[2230]: I1008 19:55:47.712612 2230 topology_manager.go:215] "Topology Admit Handler" podUID="acbf5f7f51337423f1bae2d703422297" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:55:47.713383 kubelet[2230]: I1008 19:55:47.713363 2230 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:55:47.714063 kubelet[2230]: I1008 19:55:47.714045 2230 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:55:47.719972 systemd[1]: Created slice kubepods-burstable-podacbf5f7f51337423f1bae2d703422297.slice - libcontainer container kubepods-burstable-podacbf5f7f51337423f1bae2d703422297.slice. Oct 8 19:55:47.733844 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 8 19:55:47.747693 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. 
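The kubepods-burstable-pod&lt;uid&gt;.slice names systemd creates above follow directly from the pod UID and QoS class. A small sketch of the naming rule, assuming the usual systemd cgroup convention of replacing dashes in the UID with underscores (none of the UIDs in this log contain dashes, so the escaping is invisible here).

```go
// Sketch of the per-pod systemd slice naming seen in the
// "Created slice kubepods-burstable-pod....slice" entries above.
package main

import (
	"fmt"
	"strings"
)

// sliceName builds kubepods-<qos>-pod<uid>.slice, escaping dashes
// in the UID to underscores as systemd unit names require.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "acbf5f7f51337423f1bae2d703422297"))
	// Output: kubepods-burstable-podacbf5f7f51337423f1bae2d703422297.slice
}
```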
Oct 8 19:55:47.795387 kubelet[2230]: I1008 19:55:47.795361 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:47.795506 kubelet[2230]: I1008 19:55:47.795397 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:47.795506 kubelet[2230]: I1008 19:55:47.795416 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:47.795506 kubelet[2230]: I1008 19:55:47.795433 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:47.795506 kubelet[2230]: I1008 19:55:47.795449 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:47.795506 kubelet[2230]: I1008 19:55:47.795491 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:47.795677 kubelet[2230]: I1008 19:55:47.795569 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:47.795677 kubelet[2230]: I1008 19:55:47.795594 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:47.795677 kubelet[2230]: I1008 19:55:47.795612 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " 
pod="kube-system/kube-scheduler-localhost" Oct 8 19:55:47.831084 kubelet[2230]: W1008 19:55:47.830951 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.831084 kubelet[2230]: E1008 19:55:47.831018 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.835363 kubelet[2230]: W1008 19:55:47.835331 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.835363 kubelet[2230]: E1008 19:55:47.835366 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.907243 kubelet[2230]: W1008 19:55:47.907176 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:47.907243 kubelet[2230]: E1008 19:55:47.907231 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:48.031977 kubelet[2230]: E1008 19:55:48.031921 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:48.032693 containerd[1470]: time="2024-10-08T19:55:48.032648747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:acbf5f7f51337423f1bae2d703422297,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:48.045945 kubelet[2230]: E1008 19:55:48.045909 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:48.046423 containerd[1470]: time="2024-10-08T19:55:48.046387239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:48.049715 kubelet[2230]: E1008 19:55:48.049678 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:48.050119 containerd[1470]: time="2024-10-08T19:55:48.050074425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:48.395493 kubelet[2230]: E1008 19:55:48.395428 2230 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Oct 8 19:55:48.499249 kubelet[2230]: I1008 19:55:48.499211 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:48.499631 kubelet[2230]: E1008 19:55:48.499608 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 8 19:55:48.520265 kubelet[2230]: W1008 19:55:48.520188 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:48.520265 kubelet[2230]: E1008 19:55:48.520252 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:49.018184 kubelet[2230]: E1008 19:55:49.018149 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:49.595597 kubelet[2230]: W1008 19:55:49.595542 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:49.595597 kubelet[2230]: E1008 19:55:49.595596 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:49.996359 kubelet[2230]: E1008 19:55:49.996254 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="3.2s" Oct 8 19:55:50.101078 kubelet[2230]: I1008 19:55:50.101035 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:50.101581 kubelet[2230]: E1008 19:55:50.101280 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 8 19:55:50.224495 kubelet[2230]: W1008 19:55:50.224419 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:50.224495 kubelet[2230]: E1008 19:55:50.224460 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 
19:55:50.902443 kubelet[2230]: W1008 19:55:50.902396 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:50.902443 kubelet[2230]: E1008 19:55:50.902445 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:50.910667 kubelet[2230]: W1008 19:55:50.910645 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:50.910708 kubelet[2230]: E1008 19:55:50.910674 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 8 19:55:51.278517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652684223.mount: Deactivated successfully. Oct 8 19:55:51.286196 containerd[1470]: time="2024-10-08T19:55:51.286115968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:51.288268 containerd[1470]: time="2024-10-08T19:55:51.288181691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:55:51.289269 containerd[1470]: time="2024-10-08T19:55:51.289231911Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:51.290160 containerd[1470]: time="2024-10-08T19:55:51.290127936Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:51.291149 containerd[1470]: time="2024-10-08T19:55:51.291120775Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:51.291996 containerd[1470]: time="2024-10-08T19:55:51.291856082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:55:51.292703 containerd[1470]: time="2024-10-08T19:55:51.292666944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 19:55:51.294588 containerd[1470]: time="2024-10-08T19:55:51.294546491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:51.296747 containerd[1470]: time="2024-10-08T19:55:51.296692919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with 
image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.263956192s" Oct 8 19:55:51.297397 containerd[1470]: time="2024-10-08T19:55:51.297356969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.247193544s" Oct 8 19:55:51.300029 containerd[1470]: time="2024-10-08T19:55:51.299995408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.25355159s" Oct 8 19:55:51.438986 containerd[1470]: time="2024-10-08T19:55:51.438764157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:51.438986 containerd[1470]: time="2024-10-08T19:55:51.438815345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:51.438986 containerd[1470]: time="2024-10-08T19:55:51.438842717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.438986 containerd[1470]: time="2024-10-08T19:55:51.438918261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.439230 containerd[1470]: time="2024-10-08T19:55:51.438753987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:51.439230 containerd[1470]: time="2024-10-08T19:55:51.438803912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:51.439230 containerd[1470]: time="2024-10-08T19:55:51.438817449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.439230 containerd[1470]: time="2024-10-08T19:55:51.438884406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.441007 containerd[1470]: time="2024-10-08T19:55:51.440789282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:51.441007 containerd[1470]: time="2024-10-08T19:55:51.440848505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:51.441007 containerd[1470]: time="2024-10-08T19:55:51.440865317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.441007 containerd[1470]: time="2024-10-08T19:55:51.440945531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:51.469309 systemd[1]: Started cri-containerd-ff996e73186b0fb5937922b90f77680cc464abb97e267981aea8608c37449628.scope - libcontainer container ff996e73186b0fb5937922b90f77680cc464abb97e267981aea8608c37449628. Oct 8 19:55:51.474562 systemd[1]: Started cri-containerd-9e992cc0746427b7c99bfa771f8e79dbbc3f8500be296cd2650af8395616ba68.scope - libcontainer container 9e992cc0746427b7c99bfa771f8e79dbbc3f8500be296cd2650af8395616ba68. Oct 8 19:55:51.477040 systemd[1]: Started cri-containerd-b552fc21f8be138cd4e47c9162922ec2934ce9434a2dd8801097e6e257818c7d.scope - libcontainer container b552fc21f8be138cd4e47c9162922ec2934ce9434a2dd8801097e6e257818c7d. Oct 8 19:55:51.516084 containerd[1470]: time="2024-10-08T19:55:51.516040213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff996e73186b0fb5937922b90f77680cc464abb97e267981aea8608c37449628\"" Oct 8 19:55:51.517776 kubelet[2230]: E1008 19:55:51.517752 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:51.520686 containerd[1470]: time="2024-10-08T19:55:51.520638912Z" level=info msg="CreateContainer within sandbox \"ff996e73186b0fb5937922b90f77680cc464abb97e267981aea8608c37449628\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:55:51.522739 containerd[1470]: time="2024-10-08T19:55:51.522706138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:acbf5f7f51337423f1bae2d703422297,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e992cc0746427b7c99bfa771f8e79dbbc3f8500be296cd2650af8395616ba68\"" Oct 8 19:55:51.523719 kubelet[2230]: E1008 19:55:51.523697 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:51.527222 containerd[1470]: time="2024-10-08T19:55:51.527185089Z" level=info msg="CreateContainer within sandbox \"9e992cc0746427b7c99bfa771f8e79dbbc3f8500be296cd2650af8395616ba68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:55:51.527306 containerd[1470]: time="2024-10-08T19:55:51.527276353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b552fc21f8be138cd4e47c9162922ec2934ce9434a2dd8801097e6e257818c7d\"" Oct 8 19:55:51.527914 kubelet[2230]: E1008 19:55:51.527882 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:51.529900 containerd[1470]: time="2024-10-08T19:55:51.529812737Z" level=info msg="CreateContainer within sandbox \"b552fc21f8be138cd4e47c9162922ec2934ce9434a2dd8801097e6e257818c7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:55:51.548624 containerd[1470]: time="2024-10-08T19:55:51.548545995Z" level=info msg="CreateContainer within sandbox \"ff996e73186b0fb5937922b90f77680cc464abb97e267981aea8608c37449628\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47eadff81c01ed792a998b0365e5b31340d383dfbd15c1def77cc32f68c025df\"" Oct 8 19:55:51.549375 
containerd[1470]: time="2024-10-08T19:55:51.549330286Z" level=info msg="StartContainer for \"47eadff81c01ed792a998b0365e5b31340d383dfbd15c1def77cc32f68c025df\"" Oct 8 19:55:51.562048 containerd[1470]: time="2024-10-08T19:55:51.561909467Z" level=info msg="CreateContainer within sandbox \"b552fc21f8be138cd4e47c9162922ec2934ce9434a2dd8801097e6e257818c7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b0cf42a16ff2cefa47c9e5da5139f35b9b3fb639fee20cd981ac61a675de727\"" Oct 8 19:55:51.562755 containerd[1470]: time="2024-10-08T19:55:51.562707715Z" level=info msg="StartContainer for \"5b0cf42a16ff2cefa47c9e5da5139f35b9b3fb639fee20cd981ac61a675de727\"" Oct 8 19:55:51.571599 containerd[1470]: time="2024-10-08T19:55:51.571545478Z" level=info msg="CreateContainer within sandbox \"9e992cc0746427b7c99bfa771f8e79dbbc3f8500be296cd2650af8395616ba68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"879439d14cf77127df3807cb6581e982fb5ecfaa50bf91a14e30b76b974bf5b5\"" Oct 8 19:55:51.572512 containerd[1470]: time="2024-10-08T19:55:51.572425120Z" level=info msg="StartContainer for \"879439d14cf77127df3807cb6581e982fb5ecfaa50bf91a14e30b76b974bf5b5\"" Oct 8 19:55:51.581327 systemd[1]: Started cri-containerd-47eadff81c01ed792a998b0365e5b31340d383dfbd15c1def77cc32f68c025df.scope - libcontainer container 47eadff81c01ed792a998b0365e5b31340d383dfbd15c1def77cc32f68c025df. Oct 8 19:55:51.597378 systemd[1]: Started cri-containerd-5b0cf42a16ff2cefa47c9e5da5139f35b9b3fb639fee20cd981ac61a675de727.scope - libcontainer container 5b0cf42a16ff2cefa47c9e5da5139f35b9b3fb639fee20cd981ac61a675de727. Oct 8 19:55:51.602106 systemd[1]: Started cri-containerd-879439d14cf77127df3807cb6581e982fb5ecfaa50bf91a14e30b76b974bf5b5.scope - libcontainer container 879439d14cf77127df3807cb6581e982fb5ecfaa50bf91a14e30b76b974bf5b5. 
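The RunPodSandbox / CreateContainer / StartContainer sequence above and below is the standard CRI flow between the kubelet and containerd. A minimal sketch of the same three gRPC calls using the k8s.io/cri-api client; the socket path, image tag, and metadata values are illustrative, not read from the kubelet's actual requests.

```go
// Sketch of the CRI call sequence the kubelet performs above:
// RunPodSandbox, then CreateContainer and StartContainer in it.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd socket path; the log only shows /var/run/crio probing failing.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "f13040d390753ac4a1fef67bb9676230",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			// Image tag is an assumption; a real request also carries command, mounts, etc.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.29.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}
```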
Oct 8 19:55:51.651039 containerd[1470]: time="2024-10-08T19:55:51.650976266Z" level=info msg="StartContainer for \"47eadff81c01ed792a998b0365e5b31340d383dfbd15c1def77cc32f68c025df\" returns successfully" Oct 8 19:55:51.651039 containerd[1470]: time="2024-10-08T19:55:51.651046280Z" level=info msg="StartContainer for \"5b0cf42a16ff2cefa47c9e5da5139f35b9b3fb639fee20cd981ac61a675de727\" returns successfully" Oct 8 19:55:51.657286 containerd[1470]: time="2024-10-08T19:55:51.657239822Z" level=info msg="StartContainer for \"879439d14cf77127df3807cb6581e982fb5ecfaa50bf91a14e30b76b974bf5b5\" returns successfully" Oct 8 19:55:52.023466 kubelet[2230]: E1008 19:55:52.023442 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:52.026138 kubelet[2230]: E1008 19:55:52.025612 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:52.026465 kubelet[2230]: E1008 19:55:52.026454 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:53.028859 kubelet[2230]: E1008 19:55:53.028818 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:53.200560 kubelet[2230]: E1008 19:55:53.200512 2230 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:55:53.201189 kubelet[2230]: E1008 19:55:53.201159 2230 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 8 19:55:53.303253 kubelet[2230]: I1008 19:55:53.303142 2230 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:53.310107 kubelet[2230]: I1008 19:55:53.310060 2230 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:55:53.316472 kubelet[2230]: E1008 19:55:53.316424 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.416954 kubelet[2230]: E1008 19:55:53.416890 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.488862 kubelet[2230]: E1008 19:55:53.488828 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:53.517749 kubelet[2230]: E1008 19:55:53.517694 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.619277 kubelet[2230]: E1008 19:55:53.619125 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.719327 kubelet[2230]: E1008 19:55:53.719250 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.819940 kubelet[2230]: E1008 19:55:53.819892 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:53.920666 
kubelet[2230]: E1008 19:55:53.920554 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.021063 kubelet[2230]: E1008 19:55:54.020996 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.122229 kubelet[2230]: E1008 19:55:54.122160 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.222934 kubelet[2230]: E1008 19:55:54.222756 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.293145 kubelet[2230]: E1008 19:55:54.293081 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:54.297438 kubelet[2230]: E1008 19:55:54.297399 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:54.323175 kubelet[2230]: E1008 19:55:54.323140 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.423779 kubelet[2230]: E1008 19:55:54.423719 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.524613 kubelet[2230]: E1008 19:55:54.524472 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.625201 kubelet[2230]: E1008 19:55:54.625152 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.726020 kubelet[2230]: E1008 19:55:54.725964 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.826717 kubelet[2230]: E1008 19:55:54.826570 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:54.927161 kubelet[2230]: E1008 19:55:54.927111 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.028210 kubelet[2230]: E1008 19:55:55.028147 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.129109 kubelet[2230]: E1008 19:55:55.128950 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.229627 kubelet[2230]: E1008 19:55:55.229546 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.330186 kubelet[2230]: E1008 19:55:55.330126 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.430756 kubelet[2230]: E1008 19:55:55.430705 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.531595 kubelet[2230]: E1008 19:55:55.531523 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.632221 kubelet[2230]: E1008 19:55:55.632160 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 
19:55:55.733333 kubelet[2230]: E1008 19:55:55.733199 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.833824 kubelet[2230]: E1008 19:55:55.833765 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:55.935045 kubelet[2230]: E1008 19:55:55.934946 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.035248 kubelet[2230]: E1008 19:55:56.035083 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.135482 kubelet[2230]: E1008 19:55:56.135409 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.236170 kubelet[2230]: E1008 19:55:56.236080 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.336853 kubelet[2230]: E1008 19:55:56.336714 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.437419 kubelet[2230]: E1008 19:55:56.437364 2230 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:56.989854 kubelet[2230]: I1008 19:55:56.989798 2230 apiserver.go:52] "Watching apiserver" Oct 8 19:55:56.992522 kubelet[2230]: I1008 19:55:56.992505 2230 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:55:57.687156 systemd[1]: Reloading requested from client PID 2515 ('systemctl') (unit session-7.scope)... Oct 8 19:55:57.687180 systemd[1]: Reloading... Oct 8 19:55:57.784127 zram_generator::config[2554]: No configuration found. Oct 8 19:55:57.897432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:55:57.989426 systemd[1]: Reloading finished in 301 ms. Oct 8 19:55:58.032481 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:58.050525 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:55:58.050850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:58.060295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:58.201398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:58.207024 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:55:58.258753 kubelet[2599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:55:58.258753 kubelet[2599]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:55:58.258753 kubelet[2599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
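The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should be set through the kubelet's config file instead. A sketch of the corresponding KubeletConfiguration fields, emitted as YAML from Go; the endpoint value is an assumption (the log only shows that containerd is the runtime), while the volume plugin directory matches the flexvolume path logged earlier.

```go
// Sketch of the config-file equivalents of the deprecated kubelet
// flags warned about above. Field names follow the v1beta1
// KubeletConfiguration type; values are illustrative.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	VolumePluginDir          string `yaml:"volumePluginDir"`
}

func main() {
	out, _ := yaml.Marshal(kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed endpoint
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	})
	fmt.Print(string(out))
}
```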
Oct 8 19:55:58.258753 kubelet[2599]: I1008 19:55:58.258072 2599 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:55:58.266422 kubelet[2599]: I1008 19:55:58.266365 2599 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:55:58.266422 kubelet[2599]: I1008 19:55:58.266414 2599 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:55:58.266798 kubelet[2599]: I1008 19:55:58.266765 2599 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:55:58.268339 kubelet[2599]: I1008 19:55:58.268312 2599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:55:58.271169 kubelet[2599]: I1008 19:55:58.271138 2599 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:55:58.281489 kubelet[2599]: I1008 19:55:58.281434 2599 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:55:58.281737 kubelet[2599]: I1008 19:55:58.281709 2599 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.281872 2599 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.281901 2599 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.281911 2599 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.281939 2599 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.282025 2599 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.282044 2599 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:55:58.282115 kubelet[2599]: I1008 19:55:58.282070 2599 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:55:58.282411 
kubelet[2599]: I1008 19:55:58.282080 2599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:55:58.286455 kubelet[2599]: I1008 19:55:58.286416 2599 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:55:58.286688 kubelet[2599]: I1008 19:55:58.286664 2599 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:55:58.287248 kubelet[2599]: I1008 19:55:58.287222 2599 server.go:1256] "Started kubelet" Oct 8 19:55:58.290875 kubelet[2599]: I1008 19:55:58.289660 2599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:55:58.290875 kubelet[2599]: I1008 19:55:58.289948 2599 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:55:58.290875 kubelet[2599]: I1008 19:55:58.289993 2599 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:55:58.291083 kubelet[2599]: I1008 19:55:58.290922 2599 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:55:58.292525 kubelet[2599]: I1008 19:55:58.292470 2599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:55:58.296268 kubelet[2599]: I1008 19:55:58.296207 2599 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:55:58.296839 kubelet[2599]: I1008 19:55:58.296791 2599 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:55:58.297414 kubelet[2599]: I1008 19:55:58.297133 2599 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:55:58.297972 kubelet[2599]: I1008 19:55:58.297945 2599 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:55:58.299422 kubelet[2599]: I1008 19:55:58.299389 2599 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:55:58.303460 kubelet[2599]: E1008 19:55:58.303405 2599 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:55:58.305536 kubelet[2599]: I1008 19:55:58.305462 2599 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:55:58.312647 kubelet[2599]: I1008 19:55:58.312602 2599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:55:58.314264 kubelet[2599]: I1008 19:55:58.314238 2599 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:55:58.314306 kubelet[2599]: I1008 19:55:58.314276 2599 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:55:58.314306 kubelet[2599]: I1008 19:55:58.314303 2599 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:55:58.314386 kubelet[2599]: E1008 19:55:58.314362 2599 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:55:58.345790 kubelet[2599]: I1008 19:55:58.345761 2599 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:55:58.345960 kubelet[2599]: I1008 19:55:58.345952 2599 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:55:58.346042 kubelet[2599]: I1008 19:55:58.346032 2599 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:55:58.346255 kubelet[2599]: I1008 19:55:58.346243 2599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:55:58.346317 kubelet[2599]: I1008 19:55:58.346309 2599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:55:58.346360 kubelet[2599]: I1008 19:55:58.346353 2599 policy_none.go:49] "None policy: Start" Oct 8 19:55:58.347201 kubelet[2599]: I1008 19:55:58.347177 2599 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:55:58.347249 kubelet[2599]: I1008 19:55:58.347212 2599 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:55:58.347396 kubelet[2599]: I1008 19:55:58.347381 2599 state_mem.go:75] "Updated machine memory state" Oct 8 19:55:58.353201 kubelet[2599]: I1008 19:55:58.352873 2599 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:55:58.353480 kubelet[2599]: I1008 19:55:58.353314 2599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:55:58.415563 kubelet[2599]: I1008 19:55:58.415483 2599 topology_manager.go:215] "Topology Admit Handler" podUID="acbf5f7f51337423f1bae2d703422297" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:55:58.415711 kubelet[2599]: I1008 19:55:58.415601 2599 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:55:58.415711 kubelet[2599]: I1008 19:55:58.415645 2599 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:55:58.459447 kubelet[2599]: I1008 19:55:58.459409 2599 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:58.469474 kubelet[2599]: I1008 19:55:58.469399 2599 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:55:58.469615 kubelet[2599]: I1008 19:55:58.469502 2599 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:55:58.499053 kubelet[2599]: I1008 19:55:58.498995 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:58.499053 kubelet[2599]: I1008 19:55:58.499058 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:58.499240 kubelet[2599]: I1008 19:55:58.499116 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:58.499240 kubelet[2599]: I1008 19:55:58.499147 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:58.499240 kubelet[2599]: I1008 19:55:58.499185 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:58.499240 kubelet[2599]: I1008 19:55:58.499216 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:58.499240 kubelet[2599]: I1008 19:55:58.499243 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acbf5f7f51337423f1bae2d703422297-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"acbf5f7f51337423f1bae2d703422297\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:58.499421 kubelet[2599]: I1008 19:55:58.499270 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:58.499421 kubelet[2599]: I1008 19:55:58.499298 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:55:58.724886 kubelet[2599]: E1008 19:55:58.724851 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:58.727242 kubelet[2599]: E1008 19:55:58.727221 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:58.727552 kubelet[2599]: E1008 19:55:58.727527 
2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:58.799499 update_engine[1458]: I20241008 19:55:58.799433 1458 update_attempter.cc:509] Updating boot flags... Oct 8 19:55:59.283289 kubelet[2599]: I1008 19:55:59.283242 2599 apiserver.go:52] "Watching apiserver" Oct 8 19:55:59.297756 kubelet[2599]: I1008 19:55:59.297727 2599 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:55:59.327429 kubelet[2599]: E1008 19:55:59.327395 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:59.430067 kubelet[2599]: E1008 19:55:59.429992 2599 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:59.430478 kubelet[2599]: E1008 19:55:59.430445 2599 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:59.430555 kubelet[2599]: E1008 19:55:59.430527 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:59.430915 kubelet[2599]: E1008 19:55:59.430889 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:59.518141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2644) Oct 8 19:55:59.566134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2647) Oct 8 19:55:59.634143 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2647) Oct 8 19:55:59.651810 kubelet[2599]: I1008 19:55:59.649026 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.648784943 podStartE2EDuration="1.648784943s" podCreationTimestamp="2024-10-08 19:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:59.648192218 +0000 UTC m=+1.435899363" watchObservedRunningTime="2024-10-08 19:55:59.648784943 +0000 UTC m=+1.436492078" Oct 8 19:55:59.896790 kubelet[2599]: I1008 19:55:59.896560 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8965146179999999 podStartE2EDuration="1.896514618s" podCreationTimestamp="2024-10-08 19:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:59.881250314 +0000 UTC m=+1.668957449" watchObservedRunningTime="2024-10-08 19:55:59.896514618 +0000 UTC m=+1.684221753" Oct 8 19:55:59.910754 kubelet[2599]: I1008 19:55:59.909806 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.909737298 podStartE2EDuration="1.909737298s" podCreationTimestamp="2024-10-08 19:55:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:59.896758441 +0000 UTC m=+1.684465576" watchObservedRunningTime="2024-10-08 19:55:59.909737298 +0000 UTC m=+1.697444433" Oct 8 19:56:00.329379 kubelet[2599]: E1008 19:56:00.329248 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:00.329379 kubelet[2599]: E1008 19:56:00.329320 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:01.331948 kubelet[2599]: E1008 19:56:01.331896 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:03.364073 kubelet[2599]: E1008 19:56:03.364038 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:03.463149 sudo[1651]: pam_unix(sudo:session): session closed for user root Oct 8 19:56:03.465112 sshd[1648]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:03.468895 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:43122.service: Deactivated successfully. Oct 8 19:56:03.470605 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:56:03.470773 systemd[1]: session-7.scope: Consumed 5.461s CPU time, 192.6M memory peak, 0B memory swap peak. Oct 8 19:56:03.471211 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:56:03.472005 systemd-logind[1457]: Removed session 7. 
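The repeated dns.go:153 "Nameserver limits exceeded" entries above come from the kubelet capping a pod's resolv.conf at three nameservers (the classic glibc resolver limit). The node's /etc/resolv.conf evidently lists more than three, so the kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) and logs that the rest were omitted. A minimal Go sketch of that check, assuming a standard resolv.conf layout; the constant and path are illustrative, not kubelet source:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc resolver limit the kubelet
// enforces when building a pod's resolv.conf (an assumption modeled on
// the log message, not a quote of the kubelet constant).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// This is the condition behind "Nameserver limits exceeded":
		// extra entries are dropped, the pod still starts.
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}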
Oct 8 19:56:04.336109 kubelet[2599]: E1008 19:56:04.336065 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:05.045427 kubelet[2599]: E1008 19:56:05.045374 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:05.341021 kubelet[2599]: E1008 19:56:05.338905 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:05.341427 kubelet[2599]: E1008 19:56:05.341204 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:06.887229 kubelet[2599]: E1008 19:56:06.887202 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:07.341007 kubelet[2599]: E1008 19:56:07.340856 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:13.007866 kubelet[2599]: I1008 19:56:13.007820 2599 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:56:13.008485 containerd[1470]: time="2024-10-08T19:56:13.008272720Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:56:13.009056 kubelet[2599]: I1008 19:56:13.009029 2599 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:56:13.068777 kubelet[2599]: I1008 19:56:13.068736 2599 topology_manager.go:215] "Topology Admit Handler" podUID="cc98770b-7ae5-4a73-91b8-ff53e0959210" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-j9x7s" Oct 8 19:56:13.070643 kubelet[2599]: W1008 19:56:13.070586 2599 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Oct 8 19:56:13.070643 kubelet[2599]: E1008 19:56:13.070624 2599 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Oct 8 19:56:13.070847 kubelet[2599]: W1008 19:56:13.070664 2599 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Oct 8 19:56:13.070847 kubelet[2599]: E1008 19:56:13.070679 2599 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Oct 8 19:56:13.074995 systemd[1]: Created slice kubepods-besteffort-podcc98770b_7ae5_4a73_91b8_ff53e0959210.slice - libcontainer container kubepods-besteffort-podcc98770b_7ae5_4a73_91b8_ff53e0959210.slice. Oct 8 19:56:13.092365 kubelet[2599]: I1008 19:56:13.092335 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc98770b-7ae5-4a73-91b8-ff53e0959210-var-lib-calico\") pod \"tigera-operator-5d56685c77-j9x7s\" (UID: \"cc98770b-7ae5-4a73-91b8-ff53e0959210\") " pod="tigera-operator/tigera-operator-5d56685c77-j9x7s" Oct 8 19:56:13.092481 kubelet[2599]: I1008 19:56:13.092386 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc75\" (UniqueName: \"kubernetes.io/projected/cc98770b-7ae5-4a73-91b8-ff53e0959210-kube-api-access-6jc75\") pod \"tigera-operator-5d56685c77-j9x7s\" (UID: \"cc98770b-7ae5-4a73-91b8-ff53e0959210\") " pod="tigera-operator/tigera-operator-5d56685c77-j9x7s" Oct 8 19:56:13.604066 kubelet[2599]: I1008 19:56:13.603334 2599 topology_manager.go:215] "Topology Admit Handler" podUID="42accd2f-c4ac-4e4f-8ea1-540c1de126be" podNamespace="kube-system" podName="kube-proxy-m6c5w" Oct 8 19:56:13.610510 systemd[1]: Created slice kubepods-besteffort-pod42accd2f_c4ac_4e4f_8ea1_540c1de126be.slice - libcontainer container kubepods-besteffort-pod42accd2f_c4ac_4e4f_8ea1_540c1de126be.slice. Oct 8 19:56:13.696260 kubelet[2599]: I1008 19:56:13.696200 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfc4w\" (UniqueName: \"kubernetes.io/projected/42accd2f-c4ac-4e4f-8ea1-540c1de126be-kube-api-access-rfc4w\") pod \"kube-proxy-m6c5w\" (UID: \"42accd2f-c4ac-4e4f-8ea1-540c1de126be\") " pod="kube-system/kube-proxy-m6c5w" Oct 8 19:56:13.696260 kubelet[2599]: I1008 19:56:13.696257 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42accd2f-c4ac-4e4f-8ea1-540c1de126be-lib-modules\") pod \"kube-proxy-m6c5w\" (UID: \"42accd2f-c4ac-4e4f-8ea1-540c1de126be\") " pod="kube-system/kube-proxy-m6c5w" Oct 8 19:56:13.696526 kubelet[2599]: I1008 19:56:13.696295 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42accd2f-c4ac-4e4f-8ea1-540c1de126be-xtables-lock\") pod \"kube-proxy-m6c5w\" (UID: \"42accd2f-c4ac-4e4f-8ea1-540c1de126be\") " pod="kube-system/kube-proxy-m6c5w" Oct 8 19:56:13.696526 kubelet[2599]: I1008 19:56:13.696367 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42accd2f-c4ac-4e4f-8ea1-540c1de126be-kube-proxy\") pod \"kube-proxy-m6c5w\" (UID: \"42accd2f-c4ac-4e4f-8ea1-540c1de126be\") " pod="kube-system/kube-proxy-m6c5w" Oct 8 19:56:13.811928 kubelet[2599]: E1008 19:56:13.811872 2599 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 19:56:13.811928 kubelet[2599]: E1008 19:56:13.811912 2599 projected.go:200] Error preparing data for projected volume 
kube-api-access-rfc4w for pod kube-system/kube-proxy-m6c5w: configmap "kube-root-ca.crt" not found Oct 8 19:56:13.812137 kubelet[2599]: E1008 19:56:13.812015 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/42accd2f-c4ac-4e4f-8ea1-540c1de126be-kube-api-access-rfc4w podName:42accd2f-c4ac-4e4f-8ea1-540c1de126be nodeName:}" failed. No retries permitted until 2024-10-08 19:56:14.311967386 +0000 UTC m=+16.099674521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rfc4w" (UniqueName: "kubernetes.io/projected/42accd2f-c4ac-4e4f-8ea1-540c1de126be-kube-api-access-rfc4w") pod "kube-proxy-m6c5w" (UID: "42accd2f-c4ac-4e4f-8ea1-540c1de126be") : configmap "kube-root-ca.crt" not found Oct 8 19:56:14.198571 kubelet[2599]: E1008 19:56:14.198526 2599 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:56:14.198571 kubelet[2599]: E1008 19:56:14.198561 2599 projected.go:200] Error preparing data for projected volume kube-api-access-6jc75 for pod tigera-operator/tigera-operator-5d56685c77-j9x7s: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:56:14.199010 kubelet[2599]: E1008 19:56:14.198611 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc98770b-7ae5-4a73-91b8-ff53e0959210-kube-api-access-6jc75 podName:cc98770b-7ae5-4a73-91b8-ff53e0959210 nodeName:}" failed. No retries permitted until 2024-10-08 19:56:14.698593348 +0000 UTC m=+16.486300483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6jc75" (UniqueName: "kubernetes.io/projected/cc98770b-7ae5-4a73-91b8-ff53e0959210-kube-api-access-6jc75") pod "tigera-operator-5d56685c77-j9x7s" (UID: "cc98770b-7ae5-4a73-91b8-ff53e0959210") : failed to sync configmap cache: timed out waiting for the condition Oct 8 19:56:14.513109 kubelet[2599]: E1008 19:56:14.512982 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:14.513662 containerd[1470]: time="2024-10-08T19:56:14.513611731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6c5w,Uid:42accd2f-c4ac-4e4f-8ea1-540c1de126be,Namespace:kube-system,Attempt:0,}" Oct 8 19:56:14.541482 containerd[1470]: time="2024-10-08T19:56:14.541381040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:14.541482 containerd[1470]: time="2024-10-08T19:56:14.541439960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:14.541482 containerd[1470]: time="2024-10-08T19:56:14.541453876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:14.541671 containerd[1470]: time="2024-10-08T19:56:14.541543706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:14.567213 systemd[1]: Started cri-containerd-08519f2b51301cb8910596a4554ba524d4991e291086c6a1d9ee5bc3ff6590cb.scope - libcontainer container 08519f2b51301cb8910596a4554ba524d4991e291086c6a1d9ee5bc3ff6590cb. 
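The nestedpendingoperations.go:348 entries above show the volume manager parking a failed MountVolume.SetUp and refusing retries until a deadline; the first backoff is the logged 500ms. A rough Go sketch of that gating, assuming exponential growth with a cap; only the 500ms start is taken from the log, the doubling factor and the 2m ceiling are assumptions:

package main

import (
	"fmt"
	"time"
)

// backoff models the "No retries permitted until ... (durationBeforeRetry ...)"
// behavior: each failure pushes the next permitted retry further out.
type backoff struct {
	failures  int
	nextRetry time.Time
}

func (b *backoff) recordFailure(now time.Time) {
	d := (500 * time.Millisecond) << b.failures // 500ms, 1s, 2s, ... (assumed doubling)
	if limit := 2 * time.Minute; d > limit {
		d = limit // assumed cap
	}
	b.failures++
	b.nextRetry = now.Add(d)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
		b.nextRetry.Format("2006-01-02 15:04:05.000000000 -0700 MST"), d)
}

func (b *backoff) retryAllowed(now time.Time) bool { return !now.Before(b.nextRetry) }

func main() {
	var b backoff
	b.recordFailure(time.Now())
	fmt.Println("retry allowed immediately:", b.retryAllowed(time.Now()))
}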
Oct 8 19:56:14.589213 containerd[1470]: time="2024-10-08T19:56:14.589154015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6c5w,Uid:42accd2f-c4ac-4e4f-8ea1-540c1de126be,Namespace:kube-system,Attempt:0,} returns sandbox id \"08519f2b51301cb8910596a4554ba524d4991e291086c6a1d9ee5bc3ff6590cb\"" Oct 8 19:56:14.589882 kubelet[2599]: E1008 19:56:14.589852 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:14.592120 containerd[1470]: time="2024-10-08T19:56:14.592063186Z" level=info msg="CreateContainer within sandbox \"08519f2b51301cb8910596a4554ba524d4991e291086c6a1d9ee5bc3ff6590cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:56:14.610895 containerd[1470]: time="2024-10-08T19:56:14.610838198Z" level=info msg="CreateContainer within sandbox \"08519f2b51301cb8910596a4554ba524d4991e291086c6a1d9ee5bc3ff6590cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0696f4b8c89857edb45df503b390769c1f54ab79194f87f129538f9c436d2931\"" Oct 8 19:56:14.611353 containerd[1470]: time="2024-10-08T19:56:14.611322350Z" level=info msg="StartContainer for \"0696f4b8c89857edb45df503b390769c1f54ab79194f87f129538f9c436d2931\"" Oct 8 19:56:14.639216 systemd[1]: Started cri-containerd-0696f4b8c89857edb45df503b390769c1f54ab79194f87f129538f9c436d2931.scope - libcontainer container 0696f4b8c89857edb45df503b390769c1f54ab79194f87f129538f9c436d2931. Oct 8 19:56:14.671637 containerd[1470]: time="2024-10-08T19:56:14.671594154Z" level=info msg="StartContainer for \"0696f4b8c89857edb45df503b390769c1f54ab79194f87f129538f9c436d2931\" returns successfully" Oct 8 19:56:14.885997 containerd[1470]: time="2024-10-08T19:56:14.885946128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-j9x7s,Uid:cc98770b-7ae5-4a73-91b8-ff53e0959210,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:56:14.921729 containerd[1470]: time="2024-10-08T19:56:14.921303803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:14.921729 containerd[1470]: time="2024-10-08T19:56:14.921375809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:14.921729 containerd[1470]: time="2024-10-08T19:56:14.921390597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:14.921729 containerd[1470]: time="2024-10-08T19:56:14.921514761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:14.945302 systemd[1]: Started cri-containerd-e903b025f8f02fc531391a69eea42b93c6bd36d8c368334261e9ace75f37bae8.scope - libcontainer container e903b025f8f02fc531391a69eea42b93c6bd36d8c368334261e9ace75f37bae8. 
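The "Created slice kubepods-besteffort-pod..." entries in this run show how the kubelet's systemd cgroup driver names pod slices: the QoS class plus the pod UID with dashes mapped to underscores, since systemd reserves "-" as the slice hierarchy separator (each container then runs in its own cri-containerd-<id>.scope, as above). A small Go sketch reproducing the name seen in the log; the helper name is invented for illustration:

package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice name visible in the log from
// a pod's QoS class and UID. The function is a sketch, not kubelet code.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "cc98770b-7ae5-4a73-91b8-ff53e0959210"))
	// kubepods-besteffort-podcc98770b_7ae5_4a73_91b8_ff53e0959210.slice
}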
Oct 8 19:56:14.983995 containerd[1470]: time="2024-10-08T19:56:14.983935483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-j9x7s,Uid:cc98770b-7ae5-4a73-91b8-ff53e0959210,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e903b025f8f02fc531391a69eea42b93c6bd36d8c368334261e9ace75f37bae8\"" Oct 8 19:56:14.989944 containerd[1470]: time="2024-10-08T19:56:14.989202586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:56:15.353692 kubelet[2599]: E1008 19:56:15.353565 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:15.361159 kubelet[2599]: I1008 19:56:15.361127 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m6c5w" podStartSLOduration=2.361062204 podStartE2EDuration="2.361062204s" podCreationTimestamp="2024-10-08 19:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:15.360952107 +0000 UTC m=+17.148659242" watchObservedRunningTime="2024-10-08 19:56:15.361062204 +0000 UTC m=+17.148769339" Oct 8 19:56:16.104291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145316397.mount: Deactivated successfully. Oct 8 19:56:16.511475 containerd[1470]: time="2024-10-08T19:56:16.511417189Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:16.512370 containerd[1470]: time="2024-10-08T19:56:16.512308538Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517" Oct 8 19:56:16.513386 containerd[1470]: time="2024-10-08T19:56:16.513345279Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:16.516002 containerd[1470]: time="2024-10-08T19:56:16.515967107Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:16.516867 containerd[1470]: time="2024-10-08T19:56:16.516831644Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.527589885s" Oct 8 19:56:16.516893 containerd[1470]: time="2024-10-08T19:56:16.516867391Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 19:56:16.518679 containerd[1470]: time="2024-10-08T19:56:16.518638136Z" level=info msg="CreateContainer within sandbox \"e903b025f8f02fc531391a69eea42b93c6bd36d8c368334261e9ace75f37bae8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:56:16.530732 containerd[1470]: time="2024-10-08T19:56:16.530676675Z" level=info msg="CreateContainer within sandbox \"e903b025f8f02fc531391a69eea42b93c6bd36d8c368334261e9ace75f37bae8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"f5e3b88510a309d7c28a3240225b4e9f92bcff743379262d4160456df2381918\"" Oct 8 19:56:16.531296 containerd[1470]: time="2024-10-08T19:56:16.531162760Z" level=info msg="StartContainer for \"f5e3b88510a309d7c28a3240225b4e9f92bcff743379262d4160456df2381918\"" Oct 8 19:56:16.566221 systemd[1]: Started cri-containerd-f5e3b88510a309d7c28a3240225b4e9f92bcff743379262d4160456df2381918.scope - libcontainer container f5e3b88510a309d7c28a3240225b4e9f92bcff743379262d4160456df2381918. Oct 8 19:56:16.599848 containerd[1470]: time="2024-10-08T19:56:16.599792938Z" level=info msg="StartContainer for \"f5e3b88510a309d7c28a3240225b4e9f92bcff743379262d4160456df2381918\" returns successfully" Oct 8 19:56:17.366919 kubelet[2599]: I1008 19:56:17.366867 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-j9x7s" podStartSLOduration=2.835340586 podStartE2EDuration="4.366807474s" podCreationTimestamp="2024-10-08 19:56:13 +0000 UTC" firstStartedPulling="2024-10-08 19:56:14.985729293 +0000 UTC m=+16.773436428" lastFinishedPulling="2024-10-08 19:56:16.517196181 +0000 UTC m=+18.304903316" observedRunningTime="2024-10-08 19:56:17.366407771 +0000 UTC m=+19.154114926" watchObservedRunningTime="2024-10-08 19:56:17.366807474 +0000 UTC m=+19.154514609" Oct 8 19:56:19.459312 kubelet[2599]: I1008 19:56:19.459258 2599 topology_manager.go:215] "Topology Admit Handler" podUID="2498188c-2e18-456a-96f4-0b4becf6fd67" podNamespace="calico-system" podName="calico-typha-5898457fd7-sd9wz" Oct 8 19:56:19.472373 systemd[1]: Created slice kubepods-besteffort-pod2498188c_2e18_456a_96f4_0b4becf6fd67.slice - libcontainer container kubepods-besteffort-pod2498188c_2e18_456a_96f4_0b4becf6fd67.slice. Oct 8 19:56:19.510109 kubelet[2599]: I1008 19:56:19.508445 2599 topology_manager.go:215] "Topology Admit Handler" podUID="1c624f67-5f50-41ec-82c1-e629f33226bc" podNamespace="calico-system" podName="calico-node-5b7js" Oct 8 19:56:19.517351 systemd[1]: Created slice kubepods-besteffort-pod1c624f67_5f50_41ec_82c1_e629f33226bc.slice - libcontainer container kubepods-besteffort-pod1c624f67_5f50_41ec_82c1_e629f33226bc.slice. 
Oct 8 19:56:19.530444 kubelet[2599]: I1008 19:56:19.530402 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-flexvol-driver-host\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530444 kubelet[2599]: I1008 19:56:19.530448 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4xq\" (UniqueName: \"kubernetes.io/projected/2498188c-2e18-456a-96f4-0b4becf6fd67-kube-api-access-lt4xq\") pod \"calico-typha-5898457fd7-sd9wz\" (UID: \"2498188c-2e18-456a-96f4-0b4becf6fd67\") " pod="calico-system/calico-typha-5898457fd7-sd9wz" Oct 8 19:56:19.530665 kubelet[2599]: I1008 19:56:19.530467 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hph\" (UniqueName: \"kubernetes.io/projected/1c624f67-5f50-41ec-82c1-e629f33226bc-kube-api-access-49hph\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530665 kubelet[2599]: I1008 19:56:19.530486 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-cni-net-dir\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530665 kubelet[2599]: I1008 19:56:19.530504 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-lib-modules\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530665 kubelet[2599]: I1008 19:56:19.530585 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c624f67-5f50-41ec-82c1-e629f33226bc-node-certs\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530665 kubelet[2599]: I1008 19:56:19.530659 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2498188c-2e18-456a-96f4-0b4becf6fd67-tigera-ca-bundle\") pod \"calico-typha-5898457fd7-sd9wz\" (UID: \"2498188c-2e18-456a-96f4-0b4becf6fd67\") " pod="calico-system/calico-typha-5898457fd7-sd9wz" Oct 8 19:56:19.530830 kubelet[2599]: I1008 19:56:19.530685 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-policysync\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530830 kubelet[2599]: I1008 19:56:19.530710 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-cni-bin-dir\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530830 
kubelet[2599]: I1008 19:56:19.530742 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2498188c-2e18-456a-96f4-0b4becf6fd67-typha-certs\") pod \"calico-typha-5898457fd7-sd9wz\" (UID: \"2498188c-2e18-456a-96f4-0b4becf6fd67\") " pod="calico-system/calico-typha-5898457fd7-sd9wz" Oct 8 19:56:19.530830 kubelet[2599]: I1008 19:56:19.530761 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-var-lib-calico\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530830 kubelet[2599]: I1008 19:56:19.530786 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-cni-log-dir\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530989 kubelet[2599]: I1008 19:56:19.530814 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-xtables-lock\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530989 kubelet[2599]: I1008 19:56:19.530835 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c624f67-5f50-41ec-82c1-e629f33226bc-tigera-ca-bundle\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.530989 kubelet[2599]: I1008 19:56:19.530891 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c624f67-5f50-41ec-82c1-e629f33226bc-var-run-calico\") pod \"calico-node-5b7js\" (UID: \"1c624f67-5f50-41ec-82c1-e629f33226bc\") " pod="calico-system/calico-node-5b7js" Oct 8 19:56:19.622557 kubelet[2599]: I1008 19:56:19.621365 2599 topology_manager.go:215] "Topology Admit Handler" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" podNamespace="calico-system" podName="csi-node-driver-g2n5q" Oct 8 19:56:19.622557 kubelet[2599]: E1008 19:56:19.621669 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:19.631246 kubelet[2599]: I1008 19:56:19.631199 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36b7e211-8774-47a4-847d-9ea19c0b13c3-kubelet-dir\") pod \"csi-node-driver-g2n5q\" (UID: \"36b7e211-8774-47a4-847d-9ea19c0b13c3\") " pod="calico-system/csi-node-driver-g2n5q" Oct 8 19:56:19.631399 kubelet[2599]: I1008 19:56:19.631288 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36b7e211-8774-47a4-847d-9ea19c0b13c3-varrun\") pod 
\"csi-node-driver-g2n5q\" (UID: \"36b7e211-8774-47a4-847d-9ea19c0b13c3\") " pod="calico-system/csi-node-driver-g2n5q" Oct 8 19:56:19.631399 kubelet[2599]: I1008 19:56:19.631339 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36b7e211-8774-47a4-847d-9ea19c0b13c3-socket-dir\") pod \"csi-node-driver-g2n5q\" (UID: \"36b7e211-8774-47a4-847d-9ea19c0b13c3\") " pod="calico-system/csi-node-driver-g2n5q" Oct 8 19:56:19.631896 kubelet[2599]: I1008 19:56:19.631556 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36b7e211-8774-47a4-847d-9ea19c0b13c3-registration-dir\") pod \"csi-node-driver-g2n5q\" (UID: \"36b7e211-8774-47a4-847d-9ea19c0b13c3\") " pod="calico-system/csi-node-driver-g2n5q" Oct 8 19:56:19.631896 kubelet[2599]: I1008 19:56:19.631616 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtrt\" (UniqueName: \"kubernetes.io/projected/36b7e211-8774-47a4-847d-9ea19c0b13c3-kube-api-access-5wtrt\") pod \"csi-node-driver-g2n5q\" (UID: \"36b7e211-8774-47a4-847d-9ea19c0b13c3\") " pod="calico-system/csi-node-driver-g2n5q" Oct 8 19:56:19.636130 kubelet[2599]: E1008 19:56:19.636019 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.636130 kubelet[2599]: W1008 19:56:19.636041 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.636130 kubelet[2599]: E1008 19:56:19.636067 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.637991 kubelet[2599]: E1008 19:56:19.637955 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.637991 kubelet[2599]: W1008 19:56:19.637985 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.638121 kubelet[2599]: E1008 19:56:19.638018 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.638376 kubelet[2599]: E1008 19:56:19.638345 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.638376 kubelet[2599]: W1008 19:56:19.638367 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.638477 kubelet[2599]: E1008 19:56:19.638386 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.652840 kubelet[2599]: E1008 19:56:19.652774 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.652840 kubelet[2599]: W1008 19:56:19.652795 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.652840 kubelet[2599]: E1008 19:56:19.652822 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.658439 kubelet[2599]: E1008 19:56:19.658199 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.658439 kubelet[2599]: W1008 19:56:19.658217 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.658439 kubelet[2599]: E1008 19:56:19.658258 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.659464 kubelet[2599]: E1008 19:56:19.659317 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.659464 kubelet[2599]: W1008 19:56:19.659340 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.659464 kubelet[2599]: E1008 19:56:19.659364 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.732482 kubelet[2599]: E1008 19:56:19.732333 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.732482 kubelet[2599]: W1008 19:56:19.732357 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.732482 kubelet[2599]: E1008 19:56:19.732379 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.732626 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.733957 kubelet[2599]: W1008 19:56:19.732635 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.732649 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.732845 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.733957 kubelet[2599]: W1008 19:56:19.732852 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.732861 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.733014 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.733957 kubelet[2599]: W1008 19:56:19.733021 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.733030 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.733957 kubelet[2599]: E1008 19:56:19.733202 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734296 kubelet[2599]: W1008 19:56:19.733212 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733221 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733405 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734296 kubelet[2599]: W1008 19:56:19.733411 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733421 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733562 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734296 kubelet[2599]: W1008 19:56:19.733568 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733578 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.734296 kubelet[2599]: E1008 19:56:19.733784 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734296 kubelet[2599]: W1008 19:56:19.733791 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.733800 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.734002 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734596 kubelet[2599]: W1008 19:56:19.734010 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.734022 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.734211 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734596 kubelet[2599]: W1008 19:56:19.734217 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.734228 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.734596 kubelet[2599]: E1008 19:56:19.734537 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.734596 kubelet[2599]: W1008 19:56:19.734560 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.734872 kubelet[2599]: E1008 19:56:19.734723 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735069 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.736020 kubelet[2599]: W1008 19:56:19.735082 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735172 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735467 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.736020 kubelet[2599]: W1008 19:56:19.735476 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735552 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735824 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.736020 kubelet[2599]: W1008 19:56:19.735835 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.736020 kubelet[2599]: E1008 19:56:19.735917 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.736434 kubelet[2599]: E1008 19:56:19.736206 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.736434 kubelet[2599]: W1008 19:56:19.736217 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.736434 kubelet[2599]: E1008 19:56:19.736290 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.736665 kubelet[2599]: E1008 19:56:19.736639 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.736665 kubelet[2599]: W1008 19:56:19.736658 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.736785 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.736995 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739113 kubelet[2599]: W1008 19:56:19.737016 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.737131 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.737340 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739113 kubelet[2599]: W1008 19:56:19.737350 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.737491 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.737638 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739113 kubelet[2599]: W1008 19:56:19.737647 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739113 kubelet[2599]: E1008 19:56:19.737729 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738150 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739444 kubelet[2599]: W1008 19:56:19.738161 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738189 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738481 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739444 kubelet[2599]: W1008 19:56:19.738490 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738553 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738792 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739444 kubelet[2599]: W1008 19:56:19.738804 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.738889 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:19.739444 kubelet[2599]: E1008 19:56:19.739101 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739911 kubelet[2599]: W1008 19:56:19.739113 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739911 kubelet[2599]: E1008 19:56:19.739132 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739911 kubelet[2599]: E1008 19:56:19.739588 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739911 kubelet[2599]: W1008 19:56:19.739603 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739911 kubelet[2599]: E1008 19:56:19.739626 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.739911 kubelet[2599]: E1008 19:56:19.739874 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.739911 kubelet[2599]: W1008 19:56:19.739884 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.739911 kubelet[2599]: E1008 19:56:19.739897 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.748254 kubelet[2599]: E1008 19:56:19.748228 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:19.748472 kubelet[2599]: W1008 19:56:19.748409 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:19.748472 kubelet[2599]: E1008 19:56:19.748437 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:19.779375 kubelet[2599]: E1008 19:56:19.779344 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:19.779712 containerd[1470]: time="2024-10-08T19:56:19.779676348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5898457fd7-sd9wz,Uid:2498188c-2e18-456a-96f4-0b4becf6fd67,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:19.808346 containerd[1470]: time="2024-10-08T19:56:19.808169282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:19.808346 containerd[1470]: time="2024-10-08T19:56:19.808221089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:19.808346 containerd[1470]: time="2024-10-08T19:56:19.808231689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:19.808646 containerd[1470]: time="2024-10-08T19:56:19.808570606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:19.820084 kubelet[2599]: E1008 19:56:19.820034 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:19.821231 containerd[1470]: time="2024-10-08T19:56:19.820548605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5b7js,Uid:1c624f67-5f50-41ec-82c1-e629f33226bc,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:19.833261 systemd[1]: Started cri-containerd-9025b164e6dcad414da8573bdbfab2bbe068bd60757afcd8f3ae9d6d2a459ba6.scope - libcontainer container 9025b164e6dcad414da8573bdbfab2bbe068bd60757afcd8f3ae9d6d2a459ba6. Oct 8 19:56:19.870551 containerd[1470]: time="2024-10-08T19:56:19.870500346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5898457fd7-sd9wz,Uid:2498188c-2e18-456a-96f4-0b4becf6fd67,Namespace:calico-system,Attempt:0,} returns sandbox id \"9025b164e6dcad414da8573bdbfab2bbe068bd60757afcd8f3ae9d6d2a459ba6\"" Oct 8 19:56:19.871546 kubelet[2599]: E1008 19:56:19.871526 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:19.872658 containerd[1470]: time="2024-10-08T19:56:19.872635353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:56:20.109530 containerd[1470]: time="2024-10-08T19:56:20.109330354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:20.109530 containerd[1470]: time="2024-10-08T19:56:20.109412709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:20.109530 containerd[1470]: time="2024-10-08T19:56:20.109424511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:20.109768 containerd[1470]: time="2024-10-08T19:56:20.109522957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:20.137412 systemd[1]: Started cri-containerd-44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d.scope - libcontainer container 44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d. 
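The recurring driver-call.go / plugins.go:730 storm around these entries is the kubelet probing the FlexVolume plugin directory nodeagent~uds: it execs the driver binary with "init" and expects a JSON status object on stdout; since the binary is missing, stdout is empty and unmarshalling "" fails with "unexpected end of JSON input". A stripped-down Go reproduction of that call pattern; the driverStatus shape and error wording are approximations, not the kubelet's exact types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON object a FlexVolume driver is
// expected to print in response to "init".
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(driver string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, args...).Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is "unexpected end of JSON input",
		// matching the driver-call.go:262 lines in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec: %v)",
			string(out), err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init")
	if err != nil {
		fmt.Println(err)
	}
}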
Oct 8 19:56:20.172447 containerd[1470]: time="2024-10-08T19:56:20.172391347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5b7js,Uid:1c624f67-5f50-41ec-82c1-e629f33226bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\"" Oct 8 19:56:20.175802 kubelet[2599]: E1008 19:56:20.173745 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:21.314738 kubelet[2599]: E1008 19:56:21.314663 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:23.315499 kubelet[2599]: E1008 19:56:23.315448 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:23.573391 containerd[1470]: time="2024-10-08T19:56:23.573272778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:23.574168 containerd[1470]: time="2024-10-08T19:56:23.574072231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 19:56:23.575207 containerd[1470]: time="2024-10-08T19:56:23.575154586Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:23.577178 containerd[1470]: time="2024-10-08T19:56:23.577132024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:23.577746 containerd[1470]: time="2024-10-08T19:56:23.577694101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.705028872s" Oct 8 19:56:23.577746 containerd[1470]: time="2024-10-08T19:56:23.577734406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 19:56:23.578302 containerd[1470]: time="2024-10-08T19:56:23.578272328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:56:23.586339 containerd[1470]: time="2024-10-08T19:56:23.586274284Z" level=info msg="CreateContainer within sandbox \"9025b164e6dcad414da8573bdbfab2bbe068bd60757afcd8f3ae9d6d2a459ba6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:56:23.603442 containerd[1470]: time="2024-10-08T19:56:23.603390826Z" level=info msg="CreateContainer within sandbox 
\"9025b164e6dcad414da8573bdbfab2bbe068bd60757afcd8f3ae9d6d2a459ba6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3bc8b1e70b0fc73c173b0d9ee096ce32f896ee432bf94865a5e6704392ec3f44\"" Oct 8 19:56:23.606169 containerd[1470]: time="2024-10-08T19:56:23.604298492Z" level=info msg="StartContainer for \"3bc8b1e70b0fc73c173b0d9ee096ce32f896ee432bf94865a5e6704392ec3f44\"" Oct 8 19:56:23.673576 systemd[1]: Started cri-containerd-3bc8b1e70b0fc73c173b0d9ee096ce32f896ee432bf94865a5e6704392ec3f44.scope - libcontainer container 3bc8b1e70b0fc73c173b0d9ee096ce32f896ee432bf94865a5e6704392ec3f44. Oct 8 19:56:23.719610 containerd[1470]: time="2024-10-08T19:56:23.719568393Z" level=info msg="StartContainer for \"3bc8b1e70b0fc73c173b0d9ee096ce32f896ee432bf94865a5e6704392ec3f44\" returns successfully" Oct 8 19:56:24.372626 kubelet[2599]: E1008 19:56:24.372589 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:24.379834 kubelet[2599]: I1008 19:56:24.379795 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5898457fd7-sd9wz" podStartSLOduration=1.673990581 podStartE2EDuration="5.379761078s" podCreationTimestamp="2024-10-08 19:56:19 +0000 UTC" firstStartedPulling="2024-10-08 19:56:19.87226161 +0000 UTC m=+21.659968745" lastFinishedPulling="2024-10-08 19:56:23.578032107 +0000 UTC m=+25.365739242" observedRunningTime="2024-10-08 19:56:24.379386563 +0000 UTC m=+26.167093698" watchObservedRunningTime="2024-10-08 19:56:24.379761078 +0000 UTC m=+26.167468213" Oct 8 19:56:24.451817 kubelet[2599]: E1008 19:56:24.451776 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.451817 kubelet[2599]: W1008 19:56:24.451807 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.451993 kubelet[2599]: E1008 19:56:24.451835 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.452135 kubelet[2599]: E1008 19:56:24.452118 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.452135 kubelet[2599]: W1008 19:56:24.452133 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.452224 kubelet[2599]: E1008 19:56:24.452150 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.452377 kubelet[2599]: E1008 19:56:24.452360 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.452377 kubelet[2599]: W1008 19:56:24.452375 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.452465 kubelet[2599]: E1008 19:56:24.452391 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.452624 kubelet[2599]: E1008 19:56:24.452606 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.452624 kubelet[2599]: W1008 19:56:24.452619 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.452699 kubelet[2599]: E1008 19:56:24.452631 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.452831 kubelet[2599]: E1008 19:56:24.452805 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.452831 kubelet[2599]: W1008 19:56:24.452814 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.452831 kubelet[2599]: E1008 19:56:24.452824 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.453005 kubelet[2599]: E1008 19:56:24.452982 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.453005 kubelet[2599]: W1008 19:56:24.452992 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.453005 kubelet[2599]: E1008 19:56:24.453001 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.453349 kubelet[2599]: E1008 19:56:24.453165 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.453349 kubelet[2599]: W1008 19:56:24.453171 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.453349 kubelet[2599]: E1008 19:56:24.453180 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.453461 kubelet[2599]: E1008 19:56:24.453374 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.453461 kubelet[2599]: W1008 19:56:24.453385 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.453461 kubelet[2599]: E1008 19:56:24.453400 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.453700 kubelet[2599]: E1008 19:56:24.453673 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.453700 kubelet[2599]: W1008 19:56:24.453688 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.453797 kubelet[2599]: E1008 19:56:24.453706 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.453985 kubelet[2599]: E1008 19:56:24.453959 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.454033 kubelet[2599]: W1008 19:56:24.453983 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.454033 kubelet[2599]: E1008 19:56:24.454011 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.454301 kubelet[2599]: E1008 19:56:24.454286 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.454301 kubelet[2599]: W1008 19:56:24.454296 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.454386 kubelet[2599]: E1008 19:56:24.454307 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.454539 kubelet[2599]: E1008 19:56:24.454521 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.454539 kubelet[2599]: W1008 19:56:24.454533 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.454539 kubelet[2599]: E1008 19:56:24.454544 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.454759 kubelet[2599]: E1008 19:56:24.454743 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.454759 kubelet[2599]: W1008 19:56:24.454754 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.454845 kubelet[2599]: E1008 19:56:24.454764 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.454957 kubelet[2599]: E1008 19:56:24.454943 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.454957 kubelet[2599]: W1008 19:56:24.454952 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.455020 kubelet[2599]: E1008 19:56:24.454962 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.455162 kubelet[2599]: E1008 19:56:24.455148 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.455162 kubelet[2599]: W1008 19:56:24.455158 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.455246 kubelet[2599]: E1008 19:56:24.455168 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.460421 kubelet[2599]: E1008 19:56:24.460403 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.460421 kubelet[2599]: W1008 19:56:24.460414 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.460509 kubelet[2599]: E1008 19:56:24.460426 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.460673 kubelet[2599]: E1008 19:56:24.460638 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.460673 kubelet[2599]: W1008 19:56:24.460647 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.460673 kubelet[2599]: E1008 19:56:24.460662 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.460903 kubelet[2599]: E1008 19:56:24.460886 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.460903 kubelet[2599]: W1008 19:56:24.460900 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.460977 kubelet[2599]: E1008 19:56:24.460921 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.461139 kubelet[2599]: E1008 19:56:24.461125 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.461139 kubelet[2599]: W1008 19:56:24.461136 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.461238 kubelet[2599]: E1008 19:56:24.461154 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.461356 kubelet[2599]: E1008 19:56:24.461341 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.461356 kubelet[2599]: W1008 19:56:24.461352 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.461427 kubelet[2599]: E1008 19:56:24.461372 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.461605 kubelet[2599]: E1008 19:56:24.461587 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.461605 kubelet[2599]: W1008 19:56:24.461600 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.461712 kubelet[2599]: E1008 19:56:24.461621 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.461894 kubelet[2599]: E1008 19:56:24.461874 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.461894 kubelet[2599]: W1008 19:56:24.461889 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.461970 kubelet[2599]: E1008 19:56:24.461902 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.462123 kubelet[2599]: E1008 19:56:24.462106 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.462123 kubelet[2599]: W1008 19:56:24.462120 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.462216 kubelet[2599]: E1008 19:56:24.462142 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.462367 kubelet[2599]: E1008 19:56:24.462344 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.462367 kubelet[2599]: W1008 19:56:24.462356 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.462441 kubelet[2599]: E1008 19:56:24.462374 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.462572 kubelet[2599]: E1008 19:56:24.462555 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.462572 kubelet[2599]: W1008 19:56:24.462568 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.462666 kubelet[2599]: E1008 19:56:24.462593 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.462798 kubelet[2599]: E1008 19:56:24.462784 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.462798 kubelet[2599]: W1008 19:56:24.462794 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.462865 kubelet[2599]: E1008 19:56:24.462812 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.463009 kubelet[2599]: E1008 19:56:24.462995 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.463009 kubelet[2599]: W1008 19:56:24.463005 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.463082 kubelet[2599]: E1008 19:56:24.463021 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.463309 kubelet[2599]: E1008 19:56:24.463292 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.463309 kubelet[2599]: W1008 19:56:24.463303 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.463382 kubelet[2599]: E1008 19:56:24.463315 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.463508 kubelet[2599]: E1008 19:56:24.463495 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.463508 kubelet[2599]: W1008 19:56:24.463504 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.463580 kubelet[2599]: E1008 19:56:24.463516 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.463712 kubelet[2599]: E1008 19:56:24.463700 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.463712 kubelet[2599]: W1008 19:56:24.463708 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.463773 kubelet[2599]: E1008 19:56:24.463717 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.463927 kubelet[2599]: E1008 19:56:24.463916 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.463927 kubelet[2599]: W1008 19:56:24.463924 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.464119 kubelet[2599]: E1008 19:56:24.463933 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:24.464264 kubelet[2599]: E1008 19:56:24.464252 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.464264 kubelet[2599]: W1008 19:56:24.464261 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.464337 kubelet[2599]: E1008 19:56:24.464271 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:24.465256 kubelet[2599]: E1008 19:56:24.465232 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:24.465256 kubelet[2599]: W1008 19:56:24.465244 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:24.465256 kubelet[2599]: E1008 19:56:24.465257 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:25.314580 kubelet[2599]: E1008 19:56:25.314541 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:25.373472 kubelet[2599]: I1008 19:56:25.373440 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:25.374081 kubelet[2599]: E1008 19:56:25.373964 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:25.461970 kubelet[2599]: E1008 19:56:25.461932 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:25.461970 kubelet[2599]: W1008 19:56:25.461954 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:25.461970 kubelet[2599]: E1008 19:56:25.461976 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:25.462227 kubelet[2599]: E1008 19:56:25.462212 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:25.462227 kubelet[2599]: W1008 19:56:25.462221 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:25.462276 kubelet[2599]: E1008 19:56:25.462233 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:25.462442 kubelet[2599]: E1008 19:56:25.462428 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:25.462442 kubelet[2599]: W1008 19:56:25.462441 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:25.462527 kubelet[2599]: E1008 19:56:25.462452 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[... 29 near-identical repetitions of the same three-line FlexVolume probe error (19:56:25.462 to 19:56:25.470) omitted ...]
Oct 8 19:56:25.471234 kubelet[2599]: E1008 19:56:25.471217 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:25.471277 kubelet[2599]: W1008 19:56:25.471231 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:25.471277 kubelet[2599]: E1008 19:56:25.471258 2599 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:26.738821 containerd[1470]: time="2024-10-08T19:56:26.738763332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:26.791901 containerd[1470]: time="2024-10-08T19:56:26.791846855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 19:56:26.842995 containerd[1470]: time="2024-10-08T19:56:26.842943924Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:26.884427 containerd[1470]: time="2024-10-08T19:56:26.884354519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:26.885219 containerd[1470]: time="2024-10-08T19:56:26.885172918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 3.306872748s" Oct 8 19:56:26.885273 containerd[1470]: time="2024-10-08T19:56:26.885218323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 19:56:26.886771 containerd[1470]: time="2024-10-08T19:56:26.886745343Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:56:27.315213 kubelet[2599]: E1008 19:56:27.315181 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:27.400364 containerd[1470]: time="2024-10-08T19:56:27.400328898Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34\"" Oct 8 19:56:27.400678 containerd[1470]: time="2024-10-08T19:56:27.400643389Z" level=info msg="StartContainer for \"8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34\"" Oct 8 19:56:27.432252 systemd[1]: Started cri-containerd-8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34.scope - libcontainer container 8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34. Oct 8 19:56:27.475543 systemd[1]: cri-containerd-8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34.scope: Deactivated successfully. 
Oct 8 19:56:27.583409 containerd[1470]: time="2024-10-08T19:56:27.583284501Z" level=info msg="StartContainer for \"8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34\" returns successfully" Oct 8 19:56:27.603450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34-rootfs.mount: Deactivated successfully. Oct 8 19:56:28.072936 containerd[1470]: time="2024-10-08T19:56:28.070498090Z" level=info msg="shim disconnected" id=8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34 namespace=k8s.io Oct 8 19:56:28.072936 containerd[1470]: time="2024-10-08T19:56:28.072912035Z" level=warning msg="cleaning up after shim disconnected" id=8867dfec026b5b931311f234bf304b19d195f2205bacdeaba8cb2d36ba8f5b34 namespace=k8s.io Oct 8 19:56:28.072936 containerd[1470]: time="2024-10-08T19:56:28.072928996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:56:28.378734 kubelet[2599]: E1008 19:56:28.378599 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:28.379605 containerd[1470]: time="2024-10-08T19:56:28.379566909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:56:29.315290 kubelet[2599]: E1008 19:56:29.315245 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:31.314928 kubelet[2599]: E1008 19:56:31.314864 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:31.616471 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:46056.service - OpenSSH per-connection server daemon (10.0.0.1:46056). Oct 8 19:56:31.697571 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 46056 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:56:31.699526 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:56:31.716290 systemd-logind[1457]: New session 8 of user core. Oct 8 19:56:31.723287 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 8 19:56:32.419848 kubelet[2599]: E1008 19:56:32.419656 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:32.423368 kubelet[2599]: I1008 19:56:32.423334 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:32.424056 sshd[3310]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:32.424855 kubelet[2599]: E1008 19:56:32.424273 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:32.427640 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:46056.service: Deactivated successfully. Oct 8 19:56:32.429628 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:56:32.431442 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:56:32.432599 systemd-logind[1457]: Removed session 8. Oct 8 19:56:32.689586 containerd[1470]: time="2024-10-08T19:56:32.689459999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.710743 containerd[1470]: time="2024-10-08T19:56:32.710663972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 19:56:32.725642 containerd[1470]: time="2024-10-08T19:56:32.725571959Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.742829 containerd[1470]: time="2024-10-08T19:56:32.742767344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.743548 containerd[1470]: time="2024-10-08T19:56:32.743518545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.363917762s" Oct 8 19:56:32.743600 containerd[1470]: time="2024-10-08T19:56:32.743559742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 19:56:32.747777 containerd[1470]: time="2024-10-08T19:56:32.747738980Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:56:32.793420 containerd[1470]: time="2024-10-08T19:56:32.793354510Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d\"" Oct 8 19:56:32.794025 containerd[1470]: time="2024-10-08T19:56:32.793963323Z" level=info msg="StartContainer for 
\"2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d\"" Oct 8 19:56:32.824227 systemd[1]: Started cri-containerd-2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d.scope - libcontainer container 2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d. Oct 8 19:56:33.026137 containerd[1470]: time="2024-10-08T19:56:33.025927494Z" level=info msg="StartContainer for \"2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d\" returns successfully" Oct 8 19:56:33.419741 kubelet[2599]: E1008 19:56:33.419707 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:33.420082 kubelet[2599]: E1008 19:56:33.420011 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:34.315716 kubelet[2599]: E1008 19:56:34.315319 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3" Oct 8 19:56:34.421200 kubelet[2599]: E1008 19:56:34.421175 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:34.607975 containerd[1470]: time="2024-10-08T19:56:34.607615981Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:56:34.610749 systemd[1]: cri-containerd-2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d.scope: Deactivated successfully. Oct 8 19:56:34.635072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d-rootfs.mount: Deactivated successfully. Oct 8 19:56:34.673685 kubelet[2599]: I1008 19:56:34.673332 2599 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:56:34.866354 kubelet[2599]: I1008 19:56:34.865134 2599 topology_manager.go:215] "Topology Admit Handler" podUID="33255908-c3af-4190-82ef-2570669d0a40" podNamespace="kube-system" podName="coredns-76f75df574-f8pfw" Oct 8 19:56:34.869474 kubelet[2599]: I1008 19:56:34.868980 2599 topology_manager.go:215] "Topology Admit Handler" podUID="8c82dc71-2684-462e-a969-004022bf30fa" podNamespace="calico-system" podName="calico-kube-controllers-669d4dfd7d-jrb8g" Oct 8 19:56:34.869474 kubelet[2599]: I1008 19:56:34.869180 2599 topology_manager.go:215] "Topology Admit Handler" podUID="a12bd77c-02fe-483c-91fa-839433fca8f9" podNamespace="kube-system" podName="coredns-76f75df574-gsz6t" Oct 8 19:56:34.872983 systemd[1]: Created slice kubepods-burstable-pod33255908_c3af_4190_82ef_2570669d0a40.slice - libcontainer container kubepods-burstable-pod33255908_c3af_4190_82ef_2570669d0a40.slice. Oct 8 19:56:34.880808 systemd[1]: Created slice kubepods-burstable-poda12bd77c_02fe_483c_91fa_839433fca8f9.slice - libcontainer container kubepods-burstable-poda12bd77c_02fe_483c_91fa_839433fca8f9.slice. 
Oct 8 19:56:34.885583 systemd[1]: Created slice kubepods-besteffort-pod8c82dc71_2684_462e_a969_004022bf30fa.slice - libcontainer container kubepods-besteffort-pod8c82dc71_2684_462e_a969_004022bf30fa.slice.
Oct 8 19:56:34.947377 containerd[1470]: time="2024-10-08T19:56:34.947282297Z" level=info msg="shim disconnected" id=2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d namespace=k8s.io
Oct 8 19:56:34.947377 containerd[1470]: time="2024-10-08T19:56:34.947348311Z" level=warning msg="cleaning up after shim disconnected" id=2af9df0b5caf0a4579c7fc5012aa99429568affa03cf96fedf361da88389dd0d namespace=k8s.io
Oct 8 19:56:34.947377 containerd[1470]: time="2024-10-08T19:56:34.947360694Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:56:35.023868 kubelet[2599]: I1008 19:56:35.023778 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pp7d\" (UniqueName: \"kubernetes.io/projected/a12bd77c-02fe-483c-91fa-839433fca8f9-kube-api-access-6pp7d\") pod \"coredns-76f75df574-gsz6t\" (UID: \"a12bd77c-02fe-483c-91fa-839433fca8f9\") " pod="kube-system/coredns-76f75df574-gsz6t"
Oct 8 19:56:35.023868 kubelet[2599]: I1008 19:56:35.023829 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33255908-c3af-4190-82ef-2570669d0a40-config-volume\") pod \"coredns-76f75df574-f8pfw\" (UID: \"33255908-c3af-4190-82ef-2570669d0a40\") " pod="kube-system/coredns-76f75df574-f8pfw"
Oct 8 19:56:35.023868 kubelet[2599]: I1008 19:56:35.023853 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshsk\" (UniqueName: \"kubernetes.io/projected/33255908-c3af-4190-82ef-2570669d0a40-kube-api-access-nshsk\") pod \"coredns-76f75df574-f8pfw\" (UID: \"33255908-c3af-4190-82ef-2570669d0a40\") " pod="kube-system/coredns-76f75df574-f8pfw"
Oct 8 19:56:35.023868 kubelet[2599]: I1008 19:56:35.023882 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvn6\" (UniqueName: \"kubernetes.io/projected/8c82dc71-2684-462e-a969-004022bf30fa-kube-api-access-tvvn6\") pod \"calico-kube-controllers-669d4dfd7d-jrb8g\" (UID: \"8c82dc71-2684-462e-a969-004022bf30fa\") " pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g"
Oct 8 19:56:35.024191 kubelet[2599]: I1008 19:56:35.023971 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c82dc71-2684-462e-a969-004022bf30fa-tigera-ca-bundle\") pod \"calico-kube-controllers-669d4dfd7d-jrb8g\" (UID: \"8c82dc71-2684-462e-a969-004022bf30fa\") " pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g"
Oct 8 19:56:35.024191 kubelet[2599]: I1008 19:56:35.024050 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12bd77c-02fe-483c-91fa-839433fca8f9-config-volume\") pod \"coredns-76f75df574-gsz6t\" (UID: \"a12bd77c-02fe-483c-91fa-839433fca8f9\") " pod="kube-system/coredns-76f75df574-gsz6t"
Oct 8 19:56:35.178461 kubelet[2599]: E1008 19:56:35.178398 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:35.179154 containerd[1470]: time="2024-10-08T19:56:35.179073664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f8pfw,Uid:33255908-c3af-4190-82ef-2570669d0a40,Namespace:kube-system,Attempt:0,}"
Oct 8 19:56:35.183722 kubelet[2599]: E1008 19:56:35.183645 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:35.184128 containerd[1470]: time="2024-10-08T19:56:35.183997919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gsz6t,Uid:a12bd77c-02fe-483c-91fa-839433fca8f9,Namespace:kube-system,Attempt:0,}"
Oct 8 19:56:35.188515 containerd[1470]: time="2024-10-08T19:56:35.188474394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669d4dfd7d-jrb8g,Uid:8c82dc71-2684-462e-a969-004022bf30fa,Namespace:calico-system,Attempt:0,}"
Oct 8 19:56:35.292463 containerd[1470]: time="2024-10-08T19:56:35.292408811Z" level=error msg="Failed to destroy network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.293177 containerd[1470]: time="2024-10-08T19:56:35.293132761Z" level=error msg="encountered an error cleaning up failed sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.293331 containerd[1470]: time="2024-10-08T19:56:35.293193104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669d4dfd7d-jrb8g,Uid:8c82dc71-2684-462e-a969-004022bf30fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.293608 kubelet[2599]: E1008 19:56:35.293571 2599 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.293814 kubelet[2599]: E1008 19:56:35.293635 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g"
Oct 8 19:56:35.293814 kubelet[2599]: E1008 19:56:35.293656 2599 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g"
Oct 8 19:56:35.293814 kubelet[2599]: E1008 19:56:35.293714 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669d4dfd7d-jrb8g_calico-system(8c82dc71-2684-462e-a969-004022bf30fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669d4dfd7d-jrb8g_calico-system(8c82dc71-2684-462e-a969-004022bf30fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g" podUID="8c82dc71-2684-462e-a969-004022bf30fa"
Oct 8 19:56:35.294588 containerd[1470]: time="2024-10-08T19:56:35.294543389Z" level=error msg="Failed to destroy network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.294985 containerd[1470]: time="2024-10-08T19:56:35.294953359Z" level=error msg="encountered an error cleaning up failed sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.295032 containerd[1470]: time="2024-10-08T19:56:35.295010406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f8pfw,Uid:33255908-c3af-4190-82ef-2570669d0a40,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.295340 kubelet[2599]: E1008 19:56:35.295310 2599 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.295396 kubelet[2599]: E1008 19:56:35.295368 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f8pfw"
Oct 8 19:56:35.295396 kubelet[2599]: E1008 19:56:35.295395 2599 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f8pfw"
Oct 8 19:56:35.295469 kubelet[2599]: E1008 19:56:35.295454 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f8pfw_kube-system(33255908-c3af-4190-82ef-2570669d0a40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f8pfw_kube-system(33255908-c3af-4190-82ef-2570669d0a40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f8pfw" podUID="33255908-c3af-4190-82ef-2570669d0a40"
Oct 8 19:56:35.301060 containerd[1470]: time="2024-10-08T19:56:35.301006564Z" level=error msg="Failed to destroy network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.301550 containerd[1470]: time="2024-10-08T19:56:35.301505742Z" level=error msg="encountered an error cleaning up failed sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.301616 containerd[1470]: time="2024-10-08T19:56:35.301558972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gsz6t,Uid:a12bd77c-02fe-483c-91fa-839433fca8f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.301773 kubelet[2599]: E1008 19:56:35.301742 2599 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.301838 kubelet[2599]: E1008 19:56:35.301782 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gsz6t"
Oct 8 19:56:35.301838 kubelet[2599]: E1008 19:56:35.301801 2599 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gsz6t"
Oct 8 19:56:35.301890 kubelet[2599]: E1008 19:56:35.301842 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gsz6t_kube-system(a12bd77c-02fe-483c-91fa-839433fca8f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gsz6t_kube-system(a12bd77c-02fe-483c-91fa-839433fca8f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gsz6t" podUID="a12bd77c-02fe-483c-91fa-839433fca8f9"
Oct 8 19:56:35.423921 kubelet[2599]: I1008 19:56:35.423875 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:35.424748 containerd[1470]: time="2024-10-08T19:56:35.424694770Z" level=info msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\""
Oct 8 19:56:35.424821 kubelet[2599]: I1008 19:56:35.424788 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:35.424895 containerd[1470]: time="2024-10-08T19:56:35.424876331Z" level=info msg="Ensure that sandbox 503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971 in task-service has been cleanup successfully"
Oct 8 19:56:35.425834 containerd[1470]: time="2024-10-08T19:56:35.425379666Z" level=info msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\""
Oct 8 19:56:35.425834 containerd[1470]: time="2024-10-08T19:56:35.425568921Z" level=info msg="Ensure that sandbox 437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5 in task-service has been cleanup successfully"
Oct 8 19:56:35.426169 kubelet[2599]: I1008 19:56:35.426146 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:35.426567 containerd[1470]: time="2024-10-08T19:56:35.426524525Z" level=info msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\""
Oct 8 19:56:35.426709 containerd[1470]: time="2024-10-08T19:56:35.426687191Z" level=info msg="Ensure that sandbox b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43 in task-service has been cleanup successfully"
Oct 8 19:56:35.430884 kubelet[2599]: E1008 19:56:35.430691 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:35.438566 containerd[1470]: time="2024-10-08T19:56:35.435808848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 8 19:56:35.471511 containerd[1470]: time="2024-10-08T19:56:35.471397359Z" level=error msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" failed" error="failed to destroy network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.471686 kubelet[2599]: E1008 19:56:35.471655 2599 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:35.471862 containerd[1470]: time="2024-10-08T19:56:35.471819591Z" level=error msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" failed" error="failed to destroy network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.472099 kubelet[2599]: E1008 19:56:35.472064 2599 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"}
Oct 8 19:56:35.472149 kubelet[2599]: E1008 19:56:35.472128 2599 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c82dc71-2684-462e-a969-004022bf30fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:56:35.472210 kubelet[2599]: E1008 19:56:35.472158 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c82dc71-2684-462e-a969-004022bf30fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g" podUID="8c82dc71-2684-462e-a969-004022bf30fa"
Oct 8 19:56:35.472436 kubelet[2599]: E1008 19:56:35.472410 2599 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:35.472436 kubelet[2599]: E1008 19:56:35.472462 2599 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"}
Oct 8 19:56:35.472535 kubelet[2599]: E1008 19:56:35.472497 2599 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33255908-c3af-4190-82ef-2570669d0a40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:56:35.472535 kubelet[2599]: E1008 19:56:35.472527 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33255908-c3af-4190-82ef-2570669d0a40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f8pfw" podUID="33255908-c3af-4190-82ef-2570669d0a40"
Oct 8 19:56:35.473096 containerd[1470]: time="2024-10-08T19:56:35.473032087Z" level=error msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" failed" error="failed to destroy network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:35.473220 kubelet[2599]: E1008 19:56:35.473203 2599 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:35.473273 kubelet[2599]: E1008 19:56:35.473229 2599 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"}
Oct 8 19:56:35.473273 kubelet[2599]: E1008 19:56:35.473256 2599 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a12bd77c-02fe-483c-91fa-839433fca8f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:56:35.473338 kubelet[2599]: E1008 19:56:35.473284 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a12bd77c-02fe-483c-91fa-839433fca8f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gsz6t" podUID="a12bd77c-02fe-483c-91fa-839433fca8f9"
Oct 8 19:56:35.635808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43-shm.mount: Deactivated successfully.
Oct 8 19:56:36.320351 systemd[1]: Created slice kubepods-besteffort-pod36b7e211_8774_47a4_847d_9ea19c0b13c3.slice - libcontainer container kubepods-besteffort-pod36b7e211_8774_47a4_847d_9ea19c0b13c3.slice.
Oct 8 19:56:36.322704 containerd[1470]: time="2024-10-08T19:56:36.322660889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2n5q,Uid:36b7e211-8774-47a4-847d-9ea19c0b13c3,Namespace:calico-system,Attempt:0,}"
Oct 8 19:56:36.380831 containerd[1470]: time="2024-10-08T19:56:36.380767221Z" level=error msg="Failed to destroy network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:36.381256 containerd[1470]: time="2024-10-08T19:56:36.381231242Z" level=error msg="encountered an error cleaning up failed sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:36.381309 containerd[1470]: time="2024-10-08T19:56:36.381287007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2n5q,Uid:36b7e211-8774-47a4-847d-9ea19c0b13c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:36.382017 kubelet[2599]: E1008 19:56:36.381540 2599 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:36.382017 kubelet[2599]: E1008 19:56:36.381599 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2n5q"
Oct 8 19:56:36.382017 kubelet[2599]: E1008 19:56:36.381619 2599 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2n5q"
Oct 8 19:56:36.382167 kubelet[2599]: E1008 19:56:36.381678 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g2n5q_calico-system(36b7e211-8774-47a4-847d-9ea19c0b13c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g2n5q_calico-system(36b7e211-8774-47a4-847d-9ea19c0b13c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3"
Oct 8 19:56:36.383704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857-shm.mount: Deactivated successfully.
Oct 8 19:56:36.431907 kubelet[2599]: I1008 19:56:36.431870 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:36.432851 containerd[1470]: time="2024-10-08T19:56:36.432467015Z" level=info msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\""
Oct 8 19:56:36.432851 containerd[1470]: time="2024-10-08T19:56:36.432623338Z" level=info msg="Ensure that sandbox 61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857 in task-service has been cleanup successfully"
Oct 8 19:56:36.460686 containerd[1470]: time="2024-10-08T19:56:36.460636242Z" level=error msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" failed" error="failed to destroy network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:56:36.461057 kubelet[2599]: E1008 19:56:36.461028 2599 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:36.461130 kubelet[2599]: E1008 19:56:36.461083 2599 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"}
Oct 8 19:56:36.461165 kubelet[2599]: E1008 19:56:36.461143 2599 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36b7e211-8774-47a4-847d-9ea19c0b13c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:56:36.461244 kubelet[2599]: E1008 19:56:36.461203 2599 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36b7e211-8774-47a4-847d-9ea19c0b13c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2n5q" podUID="36b7e211-8774-47a4-847d-9ea19c0b13c3"
Oct 8 19:56:37.435550 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:46058.service - OpenSSH per-connection server daemon (10.0.0.1:46058).
Oct 8 19:56:37.614196 sshd[3632]: Accepted publickey for core from 10.0.0.1 port 46058 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:37.616121 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:37.621051 systemd-logind[1457]: New session 9 of user core.
Oct 8 19:56:37.626464 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:56:37.748818 sshd[3632]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:37.752574 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:56:37.755394 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:46058.service: Deactivated successfully.
Oct 8 19:56:37.759141 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:56:37.760770 systemd-logind[1457]: Removed session 9.
Oct 8 19:56:39.769987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288750114.mount: Deactivated successfully.
Oct 8 19:56:39.964192 containerd[1470]: time="2024-10-08T19:56:39.964120998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:39.965053 containerd[1470]: time="2024-10-08T19:56:39.965010117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Oct 8 19:56:39.966464 containerd[1470]: time="2024-10-08T19:56:39.966437937Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:39.969105 containerd[1470]: time="2024-10-08T19:56:39.969045472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:39.969937 containerd[1470]: time="2024-10-08T19:56:39.969879777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.534029812s"
Oct 8 19:56:39.969937 containerd[1470]: time="2024-10-08T19:56:39.969934280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Oct 8 19:56:39.978767 containerd[1470]: time="2024-10-08T19:56:39.978629321Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 8 19:56:40.025271 containerd[1470]: time="2024-10-08T19:56:40.025170791Z" level=info msg="CreateContainer within sandbox \"44acd38727426056785f80a137eaf7060a2e5ae899b46ca1ad51dcf53d836a3d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59e732b2573982dea833c845ff884c3e1abab71c88492fbaef47daac874a6ad4\""
Oct 8 19:56:40.026079 containerd[1470]: time="2024-10-08T19:56:40.025861617Z" level=info msg="StartContainer for \"59e732b2573982dea833c845ff884c3e1abab71c88492fbaef47daac874a6ad4\""
Oct 8 19:56:40.099292 systemd[1]: Started cri-containerd-59e732b2573982dea833c845ff884c3e1abab71c88492fbaef47daac874a6ad4.scope - libcontainer container 59e732b2573982dea833c845ff884c3e1abab71c88492fbaef47daac874a6ad4.
Oct 8 19:56:40.464529 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 8 19:56:40.464647 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Oct 8 19:56:40.536900 containerd[1470]: time="2024-10-08T19:56:40.536841815Z" level=info msg="StartContainer for \"59e732b2573982dea833c845ff884c3e1abab71c88492fbaef47daac874a6ad4\" returns successfully"
Oct 8 19:56:40.656348 kubelet[2599]: E1008 19:56:40.655869 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:40.664816 kubelet[2599]: I1008 19:56:40.663778 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-5b7js" podStartSLOduration=1.86802684 podStartE2EDuration="21.663732345s" podCreationTimestamp="2024-10-08 19:56:19 +0000 UTC" firstStartedPulling="2024-10-08 19:56:20.174568082 +0000 UTC m=+21.962275227" lastFinishedPulling="2024-10-08 19:56:39.970273597 +0000 UTC m=+41.757980732" observedRunningTime="2024-10-08 19:56:40.663240924 +0000 UTC m=+42.450948069" watchObservedRunningTime="2024-10-08 19:56:40.663732345 +0000 UTC m=+42.451439481"
Oct 8 19:56:41.649557 kubelet[2599]: E1008 19:56:41.649526 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:42.041137 kernel: bpftool[3889]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Oct 8 19:56:42.285442 systemd-networkd[1408]: vxlan.calico: Link UP
Oct 8 19:56:42.285451 systemd-networkd[1408]: vxlan.calico: Gained carrier
Oct 8 19:56:42.764501 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:35558.service - OpenSSH per-connection server daemon (10.0.0.1:35558).
Oct 8 19:56:42.806465 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:42.808317 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:42.812471 systemd-logind[1457]: New session 10 of user core.
Oct 8 19:56:42.821265 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:56:42.939579 sshd[3963]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:42.944617 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:35558.service: Deactivated successfully.
Oct 8 19:56:42.947243 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:56:42.947910 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:56:42.948995 systemd-logind[1457]: Removed session 10.
Oct 8 19:56:43.926284 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL
Oct 8 19:56:47.315863 containerd[1470]: time="2024-10-08T19:56:47.315775888Z" level=info msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\""
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.387 [INFO][3999] k8s.go 608: Cleaning up netns ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.388 [INFO][3999] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" iface="eth0" netns="/var/run/netns/cni-35ed4318-3c3d-4740-e9f2-7766db7a5b70"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.388 [INFO][3999] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" iface="eth0" netns="/var/run/netns/cni-35ed4318-3c3d-4740-e9f2-7766db7a5b70"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.388 [INFO][3999] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" iface="eth0" netns="/var/run/netns/cni-35ed4318-3c3d-4740-e9f2-7766db7a5b70"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.388 [INFO][3999] k8s.go 615: Releasing IP address(es) ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.388 [INFO][3999] utils.go 188: Calico CNI releasing IP address ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.439 [INFO][4006] ipam_plugin.go 417: Releasing address using handleID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.440 [INFO][4006] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.440 [INFO][4006] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.446 [WARNING][4006] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.446 [INFO][4006] ipam_plugin.go 445: Releasing address using workloadID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.448 [INFO][4006] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:47.453832 containerd[1470]: 2024-10-08 19:56:47.450 [INFO][3999] k8s.go 621: Teardown processing complete. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:47.454527 containerd[1470]: time="2024-10-08T19:56:47.454022924Z" level=info msg="TearDown network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" successfully"
Oct 8 19:56:47.454527 containerd[1470]: time="2024-10-08T19:56:47.454056647Z" level=info msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" returns successfully"
Oct 8 19:56:47.454591 kubelet[2599]: E1008 19:56:47.454500 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:47.455305 containerd[1470]: time="2024-10-08T19:56:47.455165127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gsz6t,Uid:a12bd77c-02fe-483c-91fa-839433fca8f9,Namespace:kube-system,Attempt:1,}"
Oct 8 19:56:47.456707 systemd[1]: run-netns-cni\x2d35ed4318\x2d3c3d\x2d4740\x2de9f2\x2d7766db7a5b70.mount: Deactivated successfully.
Oct 8 19:56:47.574695 systemd-networkd[1408]: cali1760974d104: Link UP
Oct 8 19:56:47.575364 systemd-networkd[1408]: cali1760974d104: Gained carrier
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.503 [INFO][4016] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--gsz6t-eth0 coredns-76f75df574- kube-system a12bd77c-02fe-483c-91fa-839433fca8f9 847 0 2024-10-08 19:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-gsz6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1760974d104 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.503 [INFO][4016] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.533 [INFO][4028] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" HandleID="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.541 [INFO][4028] ipam_plugin.go 270: Auto assigning IP ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" HandleID="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-gsz6t", "timestamp":"2024-10-08 19:56:47.533431684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.541 [INFO][4028] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.541 [INFO][4028] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.541 [INFO][4028] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.543 [INFO][4028] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.547 [INFO][4028] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.551 [INFO][4028] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.553 [INFO][4028] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.556 [INFO][4028] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.556 [INFO][4028] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.557 [INFO][4028] ipam.go 1685: Creating new handle: k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.563 [INFO][4028] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.568 [INFO][4028] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.568 [INFO][4028] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" host="localhost"
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.568 [INFO][4028] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:47.593127 containerd[1470]: 2024-10-08 19:56:47.568 [INFO][4028] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" HandleID="k8s-pod-network.53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.571 [INFO][4016] k8s.go 386: Populated endpoint ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gsz6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a12bd77c-02fe-483c-91fa-839433fca8f9", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-gsz6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1760974d104", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.572 [INFO][4016] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.572 [INFO][4016] dataplane_linux.go 68: Setting the host side veth name to cali1760974d104 ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.574 [INFO][4016] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.576 [INFO][4016] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gsz6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a12bd77c-02fe-483c-91fa-839433fca8f9", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b", Pod:"coredns-76f75df574-gsz6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1760974d104", MAC:"1e:cb:a0:e8:09:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:47.593891 containerd[1470]: 2024-10-08 19:56:47.586 [INFO][4016] k8s.go 500: Wrote updated endpoint to datastore ContainerID="53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b" Namespace="kube-system" Pod="coredns-76f75df574-gsz6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:47.630749 containerd[1470]: time="2024-10-08T19:56:47.630112565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:47.630749 containerd[1470]: time="2024-10-08T19:56:47.630720075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:47.631142 containerd[1470]: time="2024-10-08T19:56:47.630939968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:47.631260 containerd[1470]: time="2024-10-08T19:56:47.631048862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:47.659259 systemd[1]: Started cri-containerd-53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b.scope - libcontainer container 53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b.
Oct 8 19:56:47.672023 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:56:47.706216 containerd[1470]: time="2024-10-08T19:56:47.706171482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gsz6t,Uid:a12bd77c-02fe-483c-91fa-839433fca8f9,Namespace:kube-system,Attempt:1,} returns sandbox id \"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b\""
Oct 8 19:56:47.707417 kubelet[2599]: E1008 19:56:47.707111 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:47.709342 containerd[1470]: time="2024-10-08T19:56:47.709300392Z" level=info msg="CreateContainer within sandbox \"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:56:47.951688 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:35562.service - OpenSSH per-connection server daemon (10.0.0.1:35562).
Oct 8 19:56:48.061281 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 35562 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:48.063244 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:48.067467 systemd-logind[1457]: New session 11 of user core.
Oct 8 19:56:48.088309 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:56:48.228660 sshd[4102]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:48.237331 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:35562.service: Deactivated successfully.
Oct 8 19:56:48.239361 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:56:48.240050 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:56:48.248422 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:35564.service - OpenSSH per-connection server daemon (10.0.0.1:35564).
Oct 8 19:56:48.249048 systemd-logind[1457]: Removed session 11.
Oct 8 19:56:48.277319 containerd[1470]: time="2024-10-08T19:56:48.277248706Z" level=info msg="CreateContainer within sandbox \"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb4cffc7381c8194cc98cc966bbc9666c8f4d13836a1ac1e9188021e8337dbc9\""
Oct 8 19:56:48.277943 containerd[1470]: time="2024-10-08T19:56:48.277907021Z" level=info msg="StartContainer for \"fb4cffc7381c8194cc98cc966bbc9666c8f4d13836a1ac1e9188021e8337dbc9\""
Oct 8 19:56:48.280133 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 35564 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:48.282280 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:48.287918 systemd-logind[1457]: New session 12 of user core.
Oct 8 19:56:48.294317 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:56:48.315266 systemd[1]: Started cri-containerd-fb4cffc7381c8194cc98cc966bbc9666c8f4d13836a1ac1e9188021e8337dbc9.scope - libcontainer container fb4cffc7381c8194cc98cc966bbc9666c8f4d13836a1ac1e9188021e8337dbc9.
Oct 8 19:56:48.322229 containerd[1470]: time="2024-10-08T19:56:48.322193266Z" level=info msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\""
Oct 8 19:56:48.442134 containerd[1470]: time="2024-10-08T19:56:48.439803792Z" level=info msg="StartContainer for \"fb4cffc7381c8194cc98cc966bbc9666c8f4d13836a1ac1e9188021e8337dbc9\" returns successfully"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.408 [INFO][4162] k8s.go 608: Cleaning up netns ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.409 [INFO][4162] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" iface="eth0" netns="/var/run/netns/cni-56ae3b2d-b203-349b-943c-b48fdb30ab47"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.409 [INFO][4162] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" iface="eth0" netns="/var/run/netns/cni-56ae3b2d-b203-349b-943c-b48fdb30ab47"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.410 [INFO][4162] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" iface="eth0" netns="/var/run/netns/cni-56ae3b2d-b203-349b-943c-b48fdb30ab47"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.410 [INFO][4162] k8s.go 615: Releasing IP address(es) ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.410 [INFO][4162] utils.go 188: Calico CNI releasing IP address ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.452 [INFO][4176] ipam_plugin.go 417: Releasing address using handleID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.453 [INFO][4176] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.453 [INFO][4176] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.462 [WARNING][4176] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.462 [INFO][4176] ipam_plugin.go 445: Releasing address using workloadID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.465 [INFO][4176] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:48.471436 containerd[1470]: 2024-10-08 19:56:48.467 [INFO][4162] k8s.go 621: Teardown processing complete. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:48.472717 containerd[1470]: time="2024-10-08T19:56:48.472666232Z" level=info msg="TearDown network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" successfully"
Oct 8 19:56:48.473901 systemd[1]: run-netns-cni\x2d56ae3b2d\x2db203\x2d349b\x2d943c\x2db48fdb30ab47.mount: Deactivated successfully.
Oct 8 19:56:48.475223 containerd[1470]: time="2024-10-08T19:56:48.475125587Z" level=info msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" returns successfully"
Oct 8 19:56:48.476992 containerd[1470]: time="2024-10-08T19:56:48.476844302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2n5q,Uid:36b7e211-8774-47a4-847d-9ea19c0b13c3,Namespace:calico-system,Attempt:1,}"
Oct 8 19:56:48.477786 sshd[4118]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:48.490985 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:35564.service: Deactivated successfully.
Oct 8 19:56:48.493512 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:56:48.495774 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:56:48.502647 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568).
Oct 8 19:56:48.516781 systemd-logind[1457]: Removed session 12.
Oct 8 19:56:48.567333 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:48.569527 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:48.575655 systemd-logind[1457]: New session 13 of user core.
Oct 8 19:56:48.583666 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:56:48.635262 systemd-networkd[1408]: cali521d082e563: Link UP
Oct 8 19:56:48.636508 systemd-networkd[1408]: cali521d082e563: Gained carrier
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.549 [INFO][4200] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g2n5q-eth0 csi-node-driver- calico-system 36b7e211-8774-47a4-847d-9ea19c0b13c3 862 0 2024-10-08 19:56:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-g2n5q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali521d082e563 [] []}} ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.549 [INFO][4200] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.580 [INFO][4214] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" HandleID="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.589 [INFO][4214] ipam_plugin.go 270: Auto assigning IP ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" HandleID="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e4db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g2n5q", "timestamp":"2024-10-08 19:56:48.580197666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.590 [INFO][4214] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.590 [INFO][4214] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.590 [INFO][4214] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.592 [INFO][4214] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.597 [INFO][4214] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.610 [INFO][4214] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.612 [INFO][4214] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.614 [INFO][4214] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.615 [INFO][4214] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.616 [INFO][4214] ipam.go 1685: Creating new handle: k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.621 [INFO][4214] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.627 [INFO][4214] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.627 [INFO][4214] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" host="localhost"
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.627 [INFO][4214] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:48.667550 containerd[1470]: 2024-10-08 19:56:48.627 [INFO][4214] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" HandleID="k8s-pod-network.c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.631 [INFO][4200] k8s.go 386: Populated endpoint ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2n5q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36b7e211-8774-47a4-847d-9ea19c0b13c3", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g2n5q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali521d082e563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.632 [INFO][4200] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.632 [INFO][4200] dataplane_linux.go 68: Setting the host side veth name to cali521d082e563 ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.637 [INFO][4200] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.638 [INFO][4200] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2n5q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36b7e211-8774-47a4-847d-9ea19c0b13c3", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280", Pod:"csi-node-driver-g2n5q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali521d082e563", MAC:"22:47:f2:32:88:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:48.668477 containerd[1470]: 2024-10-08 19:56:48.653 [INFO][4200] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280" Namespace="calico-system" Pod="csi-node-driver-g2n5q" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:48.682163 kubelet[2599]: E1008 19:56:48.681828 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:48.701805 containerd[1470]: time="2024-10-08T19:56:48.701690607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:48.701805 containerd[1470]: time="2024-10-08T19:56:48.701757061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:48.701805 containerd[1470]: time="2024-10-08T19:56:48.701777630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:48.702631 containerd[1470]: time="2024-10-08T19:56:48.701925006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:48.722516 kubelet[2599]: I1008 19:56:48.722412 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gsz6t" podStartSLOduration=35.722368084 podStartE2EDuration="35.722368084s" podCreationTimestamp="2024-10-08 19:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:48.694932686 +0000 UTC m=+50.482639821" watchObservedRunningTime="2024-10-08 19:56:48.722368084 +0000 UTC m=+50.510075219"
Oct 8 19:56:48.726937 systemd-networkd[1408]: cali1760974d104: Gained IPv6LL
Oct 8 19:56:48.745813 systemd[1]: Started cri-containerd-c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280.scope - libcontainer container c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280.
Oct 8 19:56:48.758974 sshd[4199]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:48.762868 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:35568.service: Deactivated successfully.
Oct 8 19:56:48.765630 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:56:48.768877 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:56:48.772334 systemd-logind[1457]: Removed session 13.
Oct 8 19:56:48.773221 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:56:48.787270 containerd[1470]: time="2024-10-08T19:56:48.787150306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2n5q,Uid:36b7e211-8774-47a4-847d-9ea19c0b13c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280\""
Oct 8 19:56:48.788977 containerd[1470]: time="2024-10-08T19:56:48.788956836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Oct 8 19:56:49.315964 containerd[1470]: time="2024-10-08T19:56:49.315737898Z" level=info msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\""
Oct 8 19:56:49.316166 containerd[1470]: time="2024-10-08T19:56:49.315972458Z" level=info msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\""
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.381 [INFO][4330] k8s.go 608: Cleaning up netns ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.381 [INFO][4330] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" iface="eth0" netns="/var/run/netns/cni-c389da6d-40c5-bb3a-b764-6e2c860932e0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.381 [INFO][4330] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" iface="eth0" netns="/var/run/netns/cni-c389da6d-40c5-bb3a-b764-6e2c860932e0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.382 [INFO][4330] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" iface="eth0" netns="/var/run/netns/cni-c389da6d-40c5-bb3a-b764-6e2c860932e0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.382 [INFO][4330] k8s.go 615: Releasing IP address(es) ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.382 [INFO][4330] utils.go 188: Calico CNI releasing IP address ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.405 [INFO][4345] ipam_plugin.go 417: Releasing address using handleID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.405 [INFO][4345] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.405 [INFO][4345] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.411 [WARNING][4345] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.411 [INFO][4345] ipam_plugin.go 445: Releasing address using workloadID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.413 [INFO][4345] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:49.418513 containerd[1470]: 2024-10-08 19:56:49.416 [INFO][4330] k8s.go 621: Teardown processing complete. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:49.420528 containerd[1470]: time="2024-10-08T19:56:49.419628694Z" level=info msg="TearDown network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" successfully"
Oct 8 19:56:49.420528 containerd[1470]: time="2024-10-08T19:56:49.419676374Z" level=info msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" returns successfully"
Oct 8 19:56:49.421409 containerd[1470]: time="2024-10-08T19:56:49.421017460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669d4dfd7d-jrb8g,Uid:8c82dc71-2684-462e-a969-004022bf30fa,Namespace:calico-system,Attempt:1,}"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.383 [INFO][4331] k8s.go 608: Cleaning up netns ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.384 [INFO][4331] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" iface="eth0" netns="/var/run/netns/cni-a7d91ff1-6b8b-c608-d022-861abb860c5e"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.384 [INFO][4331] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" iface="eth0" netns="/var/run/netns/cni-a7d91ff1-6b8b-c608-d022-861abb860c5e"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.384 [INFO][4331] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" iface="eth0" netns="/var/run/netns/cni-a7d91ff1-6b8b-c608-d022-861abb860c5e"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.384 [INFO][4331] k8s.go 615: Releasing IP address(es) ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.384 [INFO][4331] utils.go 188: Calico CNI releasing IP address ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.407 [INFO][4350] ipam_plugin.go 417: Releasing address using handleID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.407 [INFO][4350] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.413 [INFO][4350] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.420 [WARNING][4350] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.420 [INFO][4350] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.426 [INFO][4350] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:49.431342 containerd[1470]: 2024-10-08 19:56:49.428 [INFO][4331] k8s.go 621: Teardown processing complete. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:49.432223 containerd[1470]: time="2024-10-08T19:56:49.431449403Z" level=info msg="TearDown network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" successfully"
Oct 8 19:56:49.432223 containerd[1470]: time="2024-10-08T19:56:49.431471315Z" level=info msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" returns successfully"
Oct 8 19:56:49.432297 kubelet[2599]: E1008 19:56:49.431877 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:49.432765 containerd[1470]: time="2024-10-08T19:56:49.432716561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f8pfw,Uid:33255908-c3af-4190-82ef-2570669d0a40,Namespace:kube-system,Attempt:1,}"
Oct 8 19:56:49.461425 systemd[1]: run-containerd-runc-k8s.io-c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280-runc.Wlvsv0.mount: Deactivated successfully.
Oct 8 19:56:49.461535 systemd[1]: run-netns-cni\x2dc389da6d\x2d40c5\x2dbb3a\x2db764\x2d6e2c860932e0.mount: Deactivated successfully.
Oct 8 19:56:49.461620 systemd[1]: run-netns-cni\x2da7d91ff1\x2d6b8b\x2dc608\x2dd022\x2d861abb860c5e.mount: Deactivated successfully.
Oct 8 19:56:49.568482 systemd-networkd[1408]: cali93dddbc0c16: Link UP
Oct 8 19:56:49.568773 systemd-networkd[1408]: cali93dddbc0c16: Gained carrier
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.487 [INFO][4362] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0 calico-kube-controllers-669d4dfd7d- calico-system 8c82dc71-2684-462e-a969-004022bf30fa 895 0 2024-10-08 19:56:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669d4dfd7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-669d4dfd7d-jrb8g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93dddbc0c16 [] []}} ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.487 [INFO][4362] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.517 [INFO][4389] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" HandleID="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4389] ipam_plugin.go 270: Auto assigning IP ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" HandleID="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000505e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-669d4dfd7d-jrb8g", "timestamp":"2024-10-08 19:56:49.517517991 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4389] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4389] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4389] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.533 [INFO][4389] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.538 [INFO][4389] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.543 [INFO][4389] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.545 [INFO][4389] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.548 [INFO][4389] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.548 [INFO][4389] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.550 [INFO][4389] ipam.go 1685: Creating new handle: k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.555 [INFO][4389] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4389] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4389] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" host="localhost"
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4389] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:49.584457 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4389] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" HandleID="k8s-pod-network.07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.565 [INFO][4362] k8s.go 386: Populated endpoint ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0", GenerateName:"calico-kube-controllers-669d4dfd7d-", Namespace:"calico-system", SelfLink:"", UID:"8c82dc71-2684-462e-a969-004022bf30fa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669d4dfd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-669d4dfd7d-jrb8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93dddbc0c16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.565 [INFO][4362] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.565 [INFO][4362] dataplane_linux.go 68: Setting the host side veth name to cali93dddbc0c16 ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.567 [INFO][4362] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.568 [INFO][4362] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0", GenerateName:"calico-kube-controllers-669d4dfd7d-", Namespace:"calico-system", SelfLink:"", UID:"8c82dc71-2684-462e-a969-004022bf30fa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669d4dfd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f", Pod:"calico-kube-controllers-669d4dfd7d-jrb8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93dddbc0c16", MAC:"de:8e:dc:f8:69:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:49.585224 containerd[1470]: 2024-10-08 19:56:49.580 [INFO][4362] k8s.go 500: Wrote updated endpoint to datastore ContainerID="07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f" Namespace="calico-system" Pod="calico-kube-controllers-669d4dfd7d-jrb8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:49.611264 systemd-networkd[1408]: cali346747c0654: Link UP
Oct 8 19:56:49.611841 systemd-networkd[1408]: cali346747c0654: Gained carrier
Oct 8 19:56:49.614366 containerd[1470]: time="2024-10-08T19:56:49.614253502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:49.614366 containerd[1470]: time="2024-10-08T19:56:49.614320347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:49.614366 containerd[1470]: time="2024-10-08T19:56:49.614335937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:49.615458 containerd[1470]: time="2024-10-08T19:56:49.615401937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.487 [INFO][4372] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--f8pfw-eth0 coredns-76f75df574- kube-system 33255908-c3af-4190-82ef-2570669d0a40 896 0 2024-10-08 19:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-f8pfw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali346747c0654 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.487 [INFO][4372] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.518 [INFO][4388] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" HandleID="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4388] ipam_plugin.go 270: Auto assigning IP ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" HandleID="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f41a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-f8pfw", "timestamp":"2024-10-08 19:56:49.51816807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.530 [INFO][4388] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4388] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.562 [INFO][4388] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.565 [INFO][4388] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.575 [INFO][4388] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.584 [INFO][4388] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.587 [INFO][4388] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.589 [INFO][4388] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.590 [INFO][4388] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.592 [INFO][4388] ipam.go 1685: Creating new handle: k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.597 [INFO][4388] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.605 [INFO][4388] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.605 [INFO][4388] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" host="localhost"
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.605 [INFO][4388] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:49.630637 containerd[1470]: 2024-10-08 19:56:49.605 [INFO][4388] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" HandleID="k8s-pod-network.0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.608 [INFO][4372] k8s.go 386: Populated endpoint ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f8pfw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33255908-c3af-4190-82ef-2570669d0a40", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-f8pfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali346747c0654", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.609 [INFO][4372] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.609 [INFO][4372] dataplane_linux.go 68: Setting the host side veth name to cali346747c0654 ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.612 [INFO][4372] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.613 [INFO][4372] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f8pfw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33255908-c3af-4190-82ef-2570669d0a40", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24", Pod:"coredns-76f75df574-f8pfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali346747c0654", MAC:"2a:2a:bb:46:ea:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:49.631231 containerd[1470]: 2024-10-08 19:56:49.626 [INFO][4372] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24" Namespace="kube-system" Pod="coredns-76f75df574-f8pfw" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:49.640530 systemd[1]: Started cri-containerd-07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f.scope - libcontainer container 07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f.
Oct 8 19:56:49.660242 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:56:49.681741 containerd[1470]: time="2024-10-08T19:56:49.681369399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:49.681741 containerd[1470]: time="2024-10-08T19:56:49.681437457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:49.681741 containerd[1470]: time="2024-10-08T19:56:49.681449770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:49.681741 containerd[1470]: time="2024-10-08T19:56:49.681560628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:49.694211 kubelet[2599]: E1008 19:56:49.694016 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:49.718348 systemd[1]: Started cri-containerd-0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24.scope - libcontainer container 0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24.
Oct 8 19:56:49.719931 containerd[1470]: time="2024-10-08T19:56:49.719757004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669d4dfd7d-jrb8g,Uid:8c82dc71-2684-462e-a969-004022bf30fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f\""
Oct 8 19:56:49.734649 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:56:49.765715 containerd[1470]: time="2024-10-08T19:56:49.765633681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f8pfw,Uid:33255908-c3af-4190-82ef-2570669d0a40,Namespace:kube-system,Attempt:1,} returns sandbox id \"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24\""
Oct 8 19:56:49.766795 kubelet[2599]: E1008 19:56:49.766759 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:49.779190 containerd[1470]: time="2024-10-08T19:56:49.779129803Z" level=info msg="CreateContainer within sandbox \"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:56:49.804922 containerd[1470]: time="2024-10-08T19:56:49.804727040Z" level=info msg="CreateContainer within sandbox \"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb8d8189590055073530a75314daeeefde61db63d916aaaa43d5d0d304f31aa5\""
Oct 8 19:56:49.806311 containerd[1470]: time="2024-10-08T19:56:49.806251901Z" level=info msg="StartContainer for \"eb8d8189590055073530a75314daeeefde61db63d916aaaa43d5d0d304f31aa5\""
Oct 8 19:56:49.838632 systemd[1]: Started cri-containerd-eb8d8189590055073530a75314daeeefde61db63d916aaaa43d5d0d304f31aa5.scope - libcontainer container eb8d8189590055073530a75314daeeefde61db63d916aaaa43d5d0d304f31aa5.
Oct 8 19:56:49.897293 containerd[1470]: time="2024-10-08T19:56:49.897240023Z" level=info msg="StartContainer for \"eb8d8189590055073530a75314daeeefde61db63d916aaaa43d5d0d304f31aa5\" returns successfully"
Oct 8 19:56:50.262338 systemd-networkd[1408]: cali521d082e563: Gained IPv6LL
Oct 8 19:56:50.695828 kubelet[2599]: E1008 19:56:50.695333 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:50.699061 kubelet[2599]: E1008 19:56:50.699026 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:50.938181 kubelet[2599]: I1008 19:56:50.937277 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f8pfw" podStartSLOduration=37.937221979 podStartE2EDuration="37.937221979s" podCreationTimestamp="2024-10-08 19:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:50.801428253 +0000 UTC m=+52.589135388" watchObservedRunningTime="2024-10-08 19:56:50.937221979 +0000 UTC m=+52.724929114"
Oct 8 19:56:51.031292 systemd-networkd[1408]: cali346747c0654: Gained IPv6LL
Oct 8 19:56:51.170625 containerd[1470]: time="2024-10-08T19:56:51.170551141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:51.171765 containerd[1470]: time="2024-10-08T19:56:51.171691271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 8 19:56:51.173467 containerd[1470]: time="2024-10-08T19:56:51.173418560Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:51.176612 containerd[1470]: time="2024-10-08T19:56:51.176547215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:51.177801 containerd[1470]: time="2024-10-08T19:56:51.177747412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.38876144s"
Oct 8 19:56:51.177801 containerd[1470]: time="2024-10-08T19:56:51.177795575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 8 19:56:51.179522 containerd[1470]: time="2024-10-08T19:56:51.179291064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Oct 8 19:56:51.180515 containerd[1470]: time="2024-10-08T19:56:51.180474257Z" level=info msg="CreateContainer within sandbox \"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 8 19:56:51.220666 containerd[1470]: time="2024-10-08T19:56:51.220573782Z" level=info msg="CreateContainer within sandbox \"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ebec9390fa2f0f4cf76daedb50f3875b14fcdda78854b94ab8241123e2167a4\""
Oct 8 19:56:51.221554 containerd[1470]: time="2024-10-08T19:56:51.221507151Z" level=info msg="StartContainer for \"5ebec9390fa2f0f4cf76daedb50f3875b14fcdda78854b94ab8241123e2167a4\""
Oct 8 19:56:51.263246 systemd[1]: Started cri-containerd-5ebec9390fa2f0f4cf76daedb50f3875b14fcdda78854b94ab8241123e2167a4.scope - libcontainer container 5ebec9390fa2f0f4cf76daedb50f3875b14fcdda78854b94ab8241123e2167a4.
Oct 8 19:56:51.310656 containerd[1470]: time="2024-10-08T19:56:51.310518954Z" level=info msg="StartContainer for \"5ebec9390fa2f0f4cf76daedb50f3875b14fcdda78854b94ab8241123e2167a4\" returns successfully"
Oct 8 19:56:51.351323 systemd-networkd[1408]: cali93dddbc0c16: Gained IPv6LL
Oct 8 19:56:51.703363 kubelet[2599]: E1008 19:56:51.703308 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:52.705333 kubelet[2599]: E1008 19:56:52.705298 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:53.680426 containerd[1470]: time="2024-10-08T19:56:53.680372744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:53.740760 containerd[1470]: time="2024-10-08T19:56:53.740685956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Oct 8 19:56:53.763940 containerd[1470]: time="2024-10-08T19:56:53.763892584Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:53.771269 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:48774.service - OpenSSH per-connection server daemon (10.0.0.1:48774).
Oct 8 19:56:53.788974 containerd[1470]: time="2024-10-08T19:56:53.788907460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:53.789607 containerd[1470]: time="2024-10-08T19:56:53.789571254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.610246695s"
Oct 8 19:56:53.789653 containerd[1470]: time="2024-10-08T19:56:53.789608516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Oct 8 19:56:53.790469 containerd[1470]: time="2024-10-08T19:56:53.790159292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 8 19:56:53.798261 containerd[1470]: time="2024-10-08T19:56:53.798214499Z" level=info msg="CreateContainer within sandbox \"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Oct 8 19:56:53.830552 containerd[1470]: time="2024-10-08T19:56:53.830498362Z" level=info msg="CreateContainer within sandbox \"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1a88da0dc39bd9759b63b96d04d07828ce466ce0d8f8ff577c508b337f9dc5fe\""
Oct 8 19:56:53.831249 containerd[1470]: time="2024-10-08T19:56:53.831197565Z" level=info msg="StartContainer for \"1a88da0dc39bd9759b63b96d04d07828ce466ce0d8f8ff577c508b337f9dc5fe\""
Oct 8 19:56:53.839227 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 48774 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:53.840864 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:53.850554 systemd-logind[1457]: New session 14 of user core.
Oct 8 19:56:53.859448 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:56:53.876538 systemd[1]: Started cri-containerd-1a88da0dc39bd9759b63b96d04d07828ce466ce0d8f8ff577c508b337f9dc5fe.scope - libcontainer container 1a88da0dc39bd9759b63b96d04d07828ce466ce0d8f8ff577c508b337f9dc5fe.
Oct 8 19:56:53.923061 containerd[1470]: time="2024-10-08T19:56:53.921535026Z" level=info msg="StartContainer for \"1a88da0dc39bd9759b63b96d04d07828ce466ce0d8f8ff577c508b337f9dc5fe\" returns successfully"
Oct 8 19:56:54.006235 sshd[4610]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:54.011298 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:48774.service: Deactivated successfully.
Oct 8 19:56:54.013950 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:56:54.014799 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:56:54.015808 systemd-logind[1457]: Removed session 14.
Oct 8 19:56:54.947017 kubelet[2599]: I1008 19:56:54.946463 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669d4dfd7d-jrb8g" podStartSLOduration=31.878522778 podStartE2EDuration="35.94641288s" podCreationTimestamp="2024-10-08 19:56:19 +0000 UTC" firstStartedPulling="2024-10-08 19:56:49.722063982 +0000 UTC m=+51.509771117" lastFinishedPulling="2024-10-08 19:56:53.789954084 +0000 UTC m=+55.577661219" observedRunningTime="2024-10-08 19:56:54.94612294 +0000 UTC m=+56.733830085" watchObservedRunningTime="2024-10-08 19:56:54.94641288 +0000 UTC m=+56.734120015"
Oct 8 19:56:57.075317 containerd[1470]: time="2024-10-08T19:56:57.075234134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:57.079803 containerd[1470]: time="2024-10-08T19:56:57.079729736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 8 19:56:57.089422 containerd[1470]: time="2024-10-08T19:56:57.089339602Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:57.106049 containerd[1470]: time="2024-10-08T19:56:57.105934170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:57.106796 containerd[1470]: time="2024-10-08T19:56:57.106740976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 3.316552968s"
Oct 8 19:56:57.106796 containerd[1470]: time="2024-10-08T19:56:57.106784990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 8 19:56:57.108908 containerd[1470]: time="2024-10-08T19:56:57.108835075Z" level=info msg="CreateContainer within sandbox \"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 8 19:56:57.142561 containerd[1470]: time="2024-10-08T19:56:57.142493165Z" level=info msg="CreateContainer within sandbox \"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"210be7963e20f568122c71b90e410fe44214a01d08bf86f9229d0572ef44de51\""
Oct 8 19:56:57.143247 containerd[1470]: time="2024-10-08T19:56:57.143207884Z" level=info msg="StartContainer for \"210be7963e20f568122c71b90e410fe44214a01d08bf86f9229d0572ef44de51\""
Oct 8 19:56:57.184291 systemd[1]: Started cri-containerd-210be7963e20f568122c71b90e410fe44214a01d08bf86f9229d0572ef44de51.scope - libcontainer container 210be7963e20f568122c71b90e410fe44214a01d08bf86f9229d0572ef44de51.
Oct 8 19:56:57.216240 containerd[1470]: time="2024-10-08T19:56:57.216196457Z" level=info msg="StartContainer for \"210be7963e20f568122c71b90e410fe44214a01d08bf86f9229d0572ef44de51\" returns successfully"
Oct 8 19:56:57.386305 kubelet[2599]: I1008 19:56:57.386257 2599 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 8 19:56:57.387602 kubelet[2599]: I1008 19:56:57.387558 2599 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 8 19:56:57.731905 kubelet[2599]: I1008 19:56:57.731758 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-g2n5q" podStartSLOduration=30.413272338 podStartE2EDuration="38.731713662s" podCreationTimestamp="2024-10-08 19:56:19 +0000 UTC" firstStartedPulling="2024-10-08 19:56:48.788715754 +0000 UTC m=+50.576422889" lastFinishedPulling="2024-10-08 19:56:57.107157078 +0000 UTC m=+58.894864213" observedRunningTime="2024-10-08 19:56:57.731512234 +0000 UTC m=+59.519219369" watchObservedRunningTime="2024-10-08 19:56:57.731713662 +0000 UTC m=+59.519420797"
Oct 8 19:56:58.302226 containerd[1470]: time="2024-10-08T19:56:58.302180251Z" level=info msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\""
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.436 [WARNING][4749] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f8pfw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33255908-c3af-4190-82ef-2570669d0a40", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24", Pod:"coredns-76f75df574-f8pfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali346747c0654", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.436 [INFO][4749] k8s.go 608: Cleaning up netns ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.436 [INFO][4749] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" iface="eth0" netns=""
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.436 [INFO][4749] k8s.go 615: Releasing IP address(es) ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.436 [INFO][4749] utils.go 188: Calico CNI releasing IP address ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.455 [INFO][4758] ipam_plugin.go 417: Releasing address using handleID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.456 [INFO][4758] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.456 [INFO][4758] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.461 [WARNING][4758] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.461 [INFO][4758] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.463 [INFO][4758] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:58.467785 containerd[1470]: 2024-10-08 19:56:58.465 [INFO][4749] k8s.go 621: Teardown processing complete. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.468236 containerd[1470]: time="2024-10-08T19:56:58.467846132Z" level=info msg="TearDown network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" successfully"
Oct 8 19:56:58.468236 containerd[1470]: time="2024-10-08T19:56:58.467877733Z" level=info msg="StopPodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" returns successfully"
Oct 8 19:56:58.475050 containerd[1470]: time="2024-10-08T19:56:58.475010852Z" level=info msg="RemovePodSandbox for \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\""
Oct 8 19:56:58.477456 containerd[1470]: time="2024-10-08T19:56:58.477423482Z" level=info msg="Forcibly stopping sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\""
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.514 [WARNING][4781] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f8pfw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33255908-c3af-4190-82ef-2570669d0a40", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa7290f3e7bd42907688b19732ae72da27e4dc41602aa4d277135d5ff751a24", Pod:"coredns-76f75df574-f8pfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali346747c0654", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.514 [INFO][4781] k8s.go 608: Cleaning up netns ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.515 [INFO][4781] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" iface="eth0" netns=""
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.515 [INFO][4781] k8s.go 615: Releasing IP address(es) ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.515 [INFO][4781] utils.go 188: Calico CNI releasing IP address ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.582 [INFO][4792] ipam_plugin.go 417: Releasing address using handleID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.582 [INFO][4792] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.583 [INFO][4792] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.787 [WARNING][4792] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.787 [INFO][4792] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" HandleID="k8s-pod-network.b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43" Workload="localhost-k8s-coredns--76f75df574--f8pfw-eth0"
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.823 [INFO][4792] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:58.828220 containerd[1470]: 2024-10-08 19:56:58.826 [INFO][4781] k8s.go 621: Teardown processing complete. ContainerID="b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43"
Oct 8 19:56:58.828762 containerd[1470]: time="2024-10-08T19:56:58.828266211Z" level=info msg="TearDown network for sandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" successfully"
Oct 8 19:56:59.021120 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:48786.service - OpenSSH per-connection server daemon (10.0.0.1:48786).
Oct 8 19:56:59.071446 containerd[1470]: time="2024-10-08T19:56:59.071391799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:59.071581 containerd[1470]: time="2024-10-08T19:56:59.071501661Z" level=info msg="RemovePodSandbox \"b9a4d2a05e325051f1a261be1eeab53bf94efea5adf9cfd23666aaff67990d43\" returns successfully"
Oct 8 19:56:59.072220 containerd[1470]: time="2024-10-08T19:56:59.072177552Z" level=info msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\""
Oct 8 19:56:59.078828 sshd[4801]: Accepted publickey for core from 10.0.0.1 port 48786 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:56:59.081395 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:56:59.088162 systemd-logind[1457]: New session 15 of user core.
Oct 8 19:56:59.098336 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.115 [WARNING][4818] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gsz6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a12bd77c-02fe-483c-91fa-839433fca8f9", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b", Pod:"coredns-76f75df574-gsz6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1760974d104", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.116 [INFO][4818] k8s.go 608: Cleaning up netns ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.116 [INFO][4818] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" iface="eth0" netns=""
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.116 [INFO][4818] k8s.go 615: Releasing IP address(es) ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.116 [INFO][4818] utils.go 188: Calico CNI releasing IP address ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.139 [INFO][4826] ipam_plugin.go 417: Releasing address using handleID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.140 [INFO][4826] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.140 [INFO][4826] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.145 [WARNING][4826] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.145 [INFO][4826] ipam_plugin.go 445: Releasing address using workloadID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.148 [INFO][4826] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.156006 containerd[1470]: 2024-10-08 19:56:59.152 [INFO][4818] k8s.go 621: Teardown processing complete. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.156896 containerd[1470]: time="2024-10-08T19:56:59.155973081Z" level=info msg="TearDown network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" successfully"
Oct 8 19:56:59.156896 containerd[1470]: time="2024-10-08T19:56:59.156005684Z" level=info msg="StopPodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" returns successfully"
Oct 8 19:56:59.156896 containerd[1470]: time="2024-10-08T19:56:59.156793061Z" level=info msg="RemovePodSandbox for \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\""
Oct 8 19:56:59.156896 containerd[1470]: time="2024-10-08T19:56:59.156836785Z" level=info msg="Forcibly stopping sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\""
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.194 [WARNING][4856] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gsz6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a12bd77c-02fe-483c-91fa-839433fca8f9", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53f6d0a4331e3572a3325ae4c771085844f030fb5ca49e781cd032448c42620b", Pod:"coredns-76f75df574-gsz6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1760974d104", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.194 [INFO][4856] k8s.go 608: Cleaning up netns ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.195 [INFO][4856] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" iface="eth0" netns=""
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.195 [INFO][4856] k8s.go 615: Releasing IP address(es) ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.195 [INFO][4856] utils.go 188: Calico CNI releasing IP address ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.233 [INFO][4866] ipam_plugin.go 417: Releasing address using handleID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.233 [INFO][4866] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.233 [INFO][4866] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.240 [WARNING][4866] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.240 [INFO][4866] ipam_plugin.go 445: Releasing address using workloadID ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" HandleID="k8s-pod-network.503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971" Workload="localhost-k8s-coredns--76f75df574--gsz6t-eth0"
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.242 [INFO][4866] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.249131 containerd[1470]: 2024-10-08 19:56:59.246 [INFO][4856] k8s.go 621: Teardown processing complete. ContainerID="503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971"
Oct 8 19:56:59.249630 containerd[1470]: time="2024-10-08T19:56:59.249162422Z" level=info msg="TearDown network for sandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" successfully"
Oct 8 19:56:59.250939 sshd[4801]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:59.254120 containerd[1470]: time="2024-10-08T19:56:59.253962919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:59.254120 containerd[1470]: time="2024-10-08T19:56:59.254030178Z" level=info msg="RemovePodSandbox \"503014e5bb4b03c46cf15de62dd9f5f471761db89081ae79eb68fa57db8cd971\" returns successfully"
Oct 8 19:56:59.254634 containerd[1470]: time="2024-10-08T19:56:59.254607019Z" level=info msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\""
Oct 8 19:56:59.256234 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:48786.service: Deactivated successfully.
Oct 8 19:56:59.258667 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:56:59.259403 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:56:59.260324 systemd-logind[1457]: Removed session 15.
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.290 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2n5q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36b7e211-8774-47a4-847d-9ea19c0b13c3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280", Pod:"csi-node-driver-g2n5q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali521d082e563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.290 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.290 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" iface="eth0" netns=""
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.290 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.291 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.311 [INFO][4898] ipam_plugin.go 417: Releasing address using handleID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.311 [INFO][4898] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.311 [INFO][4898] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.317 [WARNING][4898] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.317 [INFO][4898] ipam_plugin.go 445: Releasing address using workloadID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.320 [INFO][4898] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.325839 containerd[1470]: 2024-10-08 19:56:59.323 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.326706 containerd[1470]: time="2024-10-08T19:56:59.325893746Z" level=info msg="TearDown network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" successfully"
Oct 8 19:56:59.326706 containerd[1470]: time="2024-10-08T19:56:59.325922892Z" level=info msg="StopPodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" returns successfully"
Oct 8 19:56:59.326706 containerd[1470]: time="2024-10-08T19:56:59.326512678Z" level=info msg="RemovePodSandbox for \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\""
Oct 8 19:56:59.326706 containerd[1470]: time="2024-10-08T19:56:59.326555320Z" level=info msg="Forcibly stopping sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\""
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.362 [WARNING][4921] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2n5q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36b7e211-8774-47a4-847d-9ea19c0b13c3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c476897a19c3f567aab8af6318fa4a386d52b57a67471ea2221c27230dea5280", Pod:"csi-node-driver-g2n5q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali521d082e563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.362 [INFO][4921] k8s.go 608: Cleaning up netns ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.362 [INFO][4921] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" iface="eth0" netns=""
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.362 [INFO][4921] k8s.go 615: Releasing IP address(es) ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.362 [INFO][4921] utils.go 188: Calico CNI releasing IP address ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.386 [INFO][4929] ipam_plugin.go 417: Releasing address using handleID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.386 [INFO][4929] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.386 [INFO][4929] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.411 [WARNING][4929] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.411 [INFO][4929] ipam_plugin.go 445: Releasing address using workloadID ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" HandleID="k8s-pod-network.61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857" Workload="localhost-k8s-csi--node--driver--g2n5q-eth0"
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.413 [INFO][4929] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.418526 containerd[1470]: 2024-10-08 19:56:59.416 [INFO][4921] k8s.go 621: Teardown processing complete. ContainerID="61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857"
Oct 8 19:56:59.419686 containerd[1470]: time="2024-10-08T19:56:59.418574038Z" level=info msg="TearDown network for sandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" successfully"
Oct 8 19:56:59.422732 containerd[1470]: time="2024-10-08T19:56:59.422686708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:59.422785 containerd[1470]: time="2024-10-08T19:56:59.422766932Z" level=info msg="RemovePodSandbox \"61a8f3c19126a482713efd13bf797d25f370337ec20d3a0ed65ed56a17e39857\" returns successfully"
Oct 8 19:56:59.423412 containerd[1470]: time="2024-10-08T19:56:59.423374783Z" level=info msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\""
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.461 [WARNING][4952] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0", GenerateName:"calico-kube-controllers-669d4dfd7d-", Namespace:"calico-system", SelfLink:"", UID:"8c82dc71-2684-462e-a969-004022bf30fa", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669d4dfd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f", Pod:"calico-kube-controllers-669d4dfd7d-jrb8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93dddbc0c16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.461 [INFO][4952] k8s.go 608: Cleaning up netns ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.461 [INFO][4952] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" iface="eth0" netns=""
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.461 [INFO][4952] k8s.go 615: Releasing IP address(es) ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.461 [INFO][4952] utils.go 188: Calico CNI releasing IP address ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.487 [INFO][4960] ipam_plugin.go 417: Releasing address using handleID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.487 [INFO][4960] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.487 [INFO][4960] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.492 [WARNING][4960] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.492 [INFO][4960] ipam_plugin.go 445: Releasing address using workloadID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.494 [INFO][4960] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.499934 containerd[1470]: 2024-10-08 19:56:59.497 [INFO][4952] k8s.go 621: Teardown processing complete. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.500368 containerd[1470]: time="2024-10-08T19:56:59.499968842Z" level=info msg="TearDown network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" successfully"
Oct 8 19:56:59.500368 containerd[1470]: time="2024-10-08T19:56:59.499996766Z" level=info msg="StopPodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" returns successfully"
Oct 8 19:56:59.500605 containerd[1470]: time="2024-10-08T19:56:59.500576212Z" level=info msg="RemovePodSandbox for \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\""
Oct 8 19:56:59.500641 containerd[1470]: time="2024-10-08T19:56:59.500615578Z" level=info msg="Forcibly stopping sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\""
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.536 [WARNING][4982] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0", GenerateName:"calico-kube-controllers-669d4dfd7d-", Namespace:"calico-system", SelfLink:"", UID:"8c82dc71-2684-462e-a969-004022bf30fa", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669d4dfd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07764b2bff29e9500d4e36294594dbfc8164495dba0bf7e44ee99fefe435755f", Pod:"calico-kube-controllers-669d4dfd7d-jrb8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93dddbc0c16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.537 [INFO][4982] k8s.go 608: Cleaning up netns ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.537 [INFO][4982] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" iface="eth0" netns=""
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.537 [INFO][4982] k8s.go 615: Releasing IP address(es) ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.537 [INFO][4982] utils.go 188: Calico CNI releasing IP address ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.558 [INFO][4989] ipam_plugin.go 417: Releasing address using handleID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.559 [INFO][4989] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.559 [INFO][4989] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.564 [WARNING][4989] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.564 [INFO][4989] ipam_plugin.go 445: Releasing address using workloadID ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" HandleID="k8s-pod-network.437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5" Workload="localhost-k8s-calico--kube--controllers--669d4dfd7d--jrb8g-eth0"
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.566 [INFO][4989] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:59.571520 containerd[1470]: 2024-10-08 19:56:59.568 [INFO][4982] k8s.go 621: Teardown processing complete. ContainerID="437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5"
Oct 8 19:56:59.572203 containerd[1470]: time="2024-10-08T19:56:59.571576718Z" level=info msg="TearDown network for sandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" successfully"
Oct 8 19:56:59.575612 containerd[1470]: time="2024-10-08T19:56:59.575577002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:59.575694 containerd[1470]: time="2024-10-08T19:56:59.575640555Z" level=info msg="RemovePodSandbox \"437622fd7922dcaf23c0a853e300768c66c88436fe85ae44179172c054d012e5\" returns successfully"
Oct 8 19:57:04.267778 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:54350.service - OpenSSH per-connection server daemon (10.0.0.1:54350).
Oct 8 19:57:04.303137 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 54350 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:04.304585 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:04.308358 systemd-logind[1457]: New session 16 of user core.
Oct 8 19:57:04.314215 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:57:04.422011 sshd[5010]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:04.425673 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:54350.service: Deactivated successfully.
Oct 8 19:57:04.427839 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:57:04.428544 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:57:04.429566 systemd-logind[1457]: Removed session 16.
Oct 8 19:57:06.063918 kubelet[2599]: E1008 19:57:06.063883 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:09.435998 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:54352.service - OpenSSH per-connection server daemon (10.0.0.1:54352).
Oct 8 19:57:09.473445 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 54352 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:09.475523 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:09.481674 systemd-logind[1457]: New session 17 of user core.
Oct 8 19:57:09.489399 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:57:09.629211 sshd[5067]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:09.633139 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:54352.service: Deactivated successfully.
Oct 8 19:57:09.635281 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:57:09.635957 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:57:09.636849 systemd-logind[1457]: Removed session 17.
Oct 8 19:57:14.641566 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:41090.service - OpenSSH per-connection server daemon (10.0.0.1:41090).
Oct 8 19:57:14.677883 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 41090 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:14.680396 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:14.685676 systemd-logind[1457]: New session 18 of user core.
Oct 8 19:57:14.696539 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:57:14.816009 sshd[5089]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:14.825150 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:41090.service: Deactivated successfully.
Oct 8 19:57:14.826803 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:57:14.828414 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:57:14.833683 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:41092.service - OpenSSH per-connection server daemon (10.0.0.1:41092).
Oct 8 19:57:14.834560 systemd-logind[1457]: Removed session 18.
Oct 8 19:57:14.867661 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 41092 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:14.869765 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:14.873932 systemd-logind[1457]: New session 19 of user core.
Oct 8 19:57:14.884274 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:57:15.277820 sshd[5105]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:15.286819 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:41092.service: Deactivated successfully.
Oct 8 19:57:15.289121 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:57:15.291371 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:57:15.305623 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:41104.service - OpenSSH per-connection server daemon (10.0.0.1:41104).
Oct 8 19:57:15.307007 systemd-logind[1457]: Removed session 19.
Oct 8 19:57:15.336232 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 41104 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:15.338256 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:15.343319 systemd-logind[1457]: New session 20 of user core.
Oct 8 19:57:15.360368 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:57:17.059409 sshd[5118]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:17.068365 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:41104.service: Deactivated successfully.
Oct 8 19:57:17.070164 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:57:17.072990 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:57:17.079818 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:41106.service - OpenSSH per-connection server daemon (10.0.0.1:41106).
Oct 8 19:57:17.081652 systemd-logind[1457]: Removed session 20.
Oct 8 19:57:17.126574 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 41106 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:17.128743 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:17.134295 systemd-logind[1457]: New session 21 of user core.
Oct 8 19:57:17.142409 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:57:17.380833 sshd[5152]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:17.390772 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:41106.service: Deactivated successfully.
Oct 8 19:57:17.393246 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:57:17.395017 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:57:17.396626 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114).
Oct 8 19:57:17.397779 systemd-logind[1457]: Removed session 21.
Oct 8 19:57:17.450844 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:17.452786 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:17.457832 systemd-logind[1457]: New session 22 of user core.
Oct 8 19:57:17.473348 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:57:17.601305 sshd[5164]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:17.605794 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:41114.service: Deactivated successfully.
Oct 8 19:57:17.607902 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:57:17.608604 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:57:17.609600 systemd-logind[1457]: Removed session 22.
Oct 8 19:57:18.315393 kubelet[2599]: E1008 19:57:18.315357 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:22.617290 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266).
Oct 8 19:57:22.660968 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:22.662461 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:22.666125 systemd-logind[1457]: New session 23 of user core.
Oct 8 19:57:22.675216 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:57:22.804163 sshd[5184]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:22.808761 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:36266.service: Deactivated successfully.
Oct 8 19:57:22.810904 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:57:22.811733 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:57:22.812784 systemd-logind[1457]: Removed session 23.
Oct 8 19:57:24.976176 kubelet[2599]: I1008 19:57:24.976125 2599 topology_manager.go:215] "Topology Admit Handler" podUID="c7371dc1-4d2f-43dc-accd-9368421ba211" podNamespace="calico-apiserver" podName="calico-apiserver-5445d866f4-l5klb"
Oct 8 19:57:24.987390 systemd[1]: Created slice kubepods-besteffort-podc7371dc1_4d2f_43dc_accd_9368421ba211.slice - libcontainer container kubepods-besteffort-podc7371dc1_4d2f_43dc_accd_9368421ba211.slice.
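[Editor's note] The recurring dns.go:153 warning above reflects the libc resolver's three-nameserver ceiling for resolv.conf: kubelet keeps the first three servers and drops the rest. A minimal Go sketch of that trimming behaviour, illustrative only; the fourth server below is hypothetical, since the log only names the three that survived (1.1.1.1, 1.0.0.1, 8.8.8.8):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxResolvConfNameservers mirrors the glibc limit of three
    // nameserver entries that triggers the warning in the log.
    const maxResolvConfNameservers = 3

    // trimNameservers keeps the first three servers and reports what
    // was dropped, the behaviour behind "Nameserver limits exceeded".
    func trimNameservers(servers []string) (applied, omitted []string) {
    	if len(servers) <= maxResolvConfNameservers {
    		return servers, nil
    	}
    	return servers[:maxResolvConfNameservers], servers[maxResolvConfNameservers:]
    }

    func main() {
    	// "192.0.2.53" is a made-up fourth entry for demonstration.
    	applied, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"})
    	fmt.Printf("the applied nameserver line is: %s (omitted: %v)\n",
    		strings.Join(applied, " "), omitted)
    }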
Oct 8 19:57:25.135133 kubelet[2599]: I1008 19:57:25.132307 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7371dc1-4d2f-43dc-accd-9368421ba211-calico-apiserver-certs\") pod \"calico-apiserver-5445d866f4-l5klb\" (UID: \"c7371dc1-4d2f-43dc-accd-9368421ba211\") " pod="calico-apiserver/calico-apiserver-5445d866f4-l5klb"
Oct 8 19:57:25.135133 kubelet[2599]: I1008 19:57:25.132395 2599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsbf\" (UniqueName: \"kubernetes.io/projected/c7371dc1-4d2f-43dc-accd-9368421ba211-kube-api-access-5fsbf\") pod \"calico-apiserver-5445d866f4-l5klb\" (UID: \"c7371dc1-4d2f-43dc-accd-9368421ba211\") " pod="calico-apiserver/calico-apiserver-5445d866f4-l5klb"
Oct 8 19:57:25.233662 kubelet[2599]: E1008 19:57:25.233504 2599 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 8 19:57:25.233662 kubelet[2599]: E1008 19:57:25.233589 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7371dc1-4d2f-43dc-accd-9368421ba211-calico-apiserver-certs podName:c7371dc1-4d2f-43dc-accd-9368421ba211 nodeName:}" failed. No retries permitted until 2024-10-08 19:57:25.733570498 +0000 UTC m=+87.521277623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c7371dc1-4d2f-43dc-accd-9368421ba211-calico-apiserver-certs") pod "calico-apiserver-5445d866f4-l5klb" (UID: "c7371dc1-4d2f-43dc-accd-9368421ba211") : secret "calico-apiserver-certs" not found
Oct 8 19:57:25.315287 kubelet[2599]: E1008 19:57:25.315246 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:25.736929 kubelet[2599]: E1008 19:57:25.736861 2599 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 8 19:57:25.737176 kubelet[2599]: E1008 19:57:25.736961 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7371dc1-4d2f-43dc-accd-9368421ba211-calico-apiserver-certs podName:c7371dc1-4d2f-43dc-accd-9368421ba211 nodeName:}" failed. No retries permitted until 2024-10-08 19:57:26.736936956 +0000 UTC m=+88.524644122 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c7371dc1-4d2f-43dc-accd-9368421ba211-calico-apiserver-certs") pod "calico-apiserver-5445d866f4-l5klb" (UID: "c7371dc1-4d2f-43dc-accd-9368421ba211") : secret "calico-apiserver-certs" not found
Oct 8 19:57:26.791294 containerd[1470]: time="2024-10-08T19:57:26.791226134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5445d866f4-l5klb,Uid:c7371dc1-4d2f-43dc-accd-9368421ba211,Namespace:calico-apiserver,Attempt:0,}"
Oct 8 19:57:26.920795 systemd-networkd[1408]: calibb275e22b6f: Link UP
Oct 8 19:57:26.921024 systemd-networkd[1408]: calibb275e22b6f: Gained carrier
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.841 [INFO][5210] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0 calico-apiserver-5445d866f4- calico-apiserver c7371dc1-4d2f-43dc-accd-9368421ba211 1155 0 2024-10-08 19:57:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5445d866f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5445d866f4-l5klb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb275e22b6f [] []}} ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.841 [INFO][5210] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.874 [INFO][5222] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" HandleID="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Workload="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.884 [INFO][5222] ipam_plugin.go 270: Auto assigning IP ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" HandleID="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Workload="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5445d866f4-l5klb", "timestamp":"2024-10-08 19:57:26.874558178 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.884 [INFO][5222] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.884 [INFO][5222] ipam_plugin.go 373: Acquired host-wide IPAM lock.
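[Editor's note] The two nestedpendingoperations.go:348 entries above show the volume-mount retry backoff doubling from 500ms to 1s while the calico-apiserver-certs secret did not yet exist. A minimal Go sketch of such a doubling policy; the starting value matches the log, but the ceiling is an assumption taken from common kubelet defaults, not something this log shows:

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetryDelay doubles the wait after each failed MountVolume.SetUp,
    // matching the 500ms -> 1s progression in the log above.
    func nextRetryDelay(last time.Duration) time.Duration {
    	const (
    		initialDelay = 500 * time.Millisecond // first "durationBeforeRetry"
    		maxDelay     = 2*time.Minute + 2*time.Second // assumed ceiling
    	)
    	if last == 0 {
    		return initialDelay
    	}
    	if next := 2 * last; next < maxDelay {
    		return next
    	}
    	return maxDelay
    }

    func main() {
    	var delay time.Duration
    	for attempt := 1; attempt <= 4; attempt++ {
    		delay = nextRetryDelay(delay)
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    	}
    }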
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.884 [INFO][5222] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.886 [INFO][5222] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.889 [INFO][5222] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.893 [INFO][5222] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.895 [INFO][5222] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.897 [INFO][5222] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.897 [INFO][5222] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.899 [INFO][5222] ipam.go 1685: Creating new handle: k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.903 [INFO][5222] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.913 [INFO][5222] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.913 [INFO][5222] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" host="localhost"
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.913 [INFO][5222] ipam_plugin.go 379: Released host-wide IPAM lock.
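[Editor's note] The ipam.go sequence above is a fixed walk: take the host-wide lock, confirm the host's affinity to the 192.168.88.128/26 block, then claim a free address in it (here 192.168.88.133). A toy Go model of the block-scan step, not Calico's implementation; the bitmap layout and the five already-used addresses are assumptions made so the example lands on the same IP:

    package main

    import (
    	"fmt"
    	"net"
    )

    // block models a /26 IPAM block like 192.168.88.128/26 from the log:
    // 64 addresses, one bool per address marking it allocated.
    type block struct {
    	cidr      net.IPNet
    	allocated [64]bool
    }

    // assign claims the first free address in the block, mimicking the
    // "Attempting to assign 1 addresses from block" step above.
    func (b *block) assign() (net.IP, bool) {
    	base := b.cidr.IP.To4()
    	for i, used := range b.allocated {
    		if !used {
    			b.allocated[i] = true
    			return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), true
    		}
    	}
    	return nil, false
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
    	b := &block{cidr: *cidr}
    	// Pretend .128-.132 are taken, as the log's claim of .133 implies.
    	for i := 0; i < 5; i++ {
    		b.allocated[i] = true
    	}
    	ip, _ := b.assign()
    	fmt.Println("assigned", ip) // 192.168.88.133
    }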
Oct 8 19:57:26.935727 containerd[1470]: 2024-10-08 19:57:26.913 [INFO][5222] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" HandleID="k8s-pod-network.89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Workload="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.917 [INFO][5210] k8s.go 386: Populated endpoint ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0", GenerateName:"calico-apiserver-5445d866f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7371dc1-4d2f-43dc-accd-9368421ba211", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 57, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5445d866f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5445d866f4-l5klb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb275e22b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.917 [INFO][5210] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.917 [INFO][5210] dataplane_linux.go 68: Setting the host side veth name to calibb275e22b6f ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.920 [INFO][5210] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.920 [INFO][5210] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0", GenerateName:"calico-apiserver-5445d866f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7371dc1-4d2f-43dc-accd-9368421ba211", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 57, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5445d866f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674", Pod:"calico-apiserver-5445d866f4-l5klb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb275e22b6f", MAC:"9e:51:32:22:c6:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:57:26.936551 containerd[1470]: 2024-10-08 19:57:26.930 [INFO][5210] k8s.go 500: Wrote updated endpoint to datastore ContainerID="89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674" Namespace="calico-apiserver" Pod="calico-apiserver-5445d866f4-l5klb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5445d866f4--l5klb-eth0"
Oct 8 19:57:26.962521 containerd[1470]: time="2024-10-08T19:57:26.962385137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:57:26.962694 containerd[1470]: time="2024-10-08T19:57:26.962525224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:57:26.962694 containerd[1470]: time="2024-10-08T19:57:26.962538308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:26.962694 containerd[1470]: time="2024-10-08T19:57:26.962639270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:26.999378 systemd[1]: Started cri-containerd-89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674.scope - libcontainer container 89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674.
Oct 8 19:57:27.012754 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:57:27.041490 containerd[1470]: time="2024-10-08T19:57:27.041358392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5445d866f4-l5klb,Uid:c7371dc1-4d2f-43dc-accd-9368421ba211,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674\""
Oct 8 19:57:27.043383 containerd[1470]: time="2024-10-08T19:57:27.043328746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 19:57:27.834930 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:36272.service - OpenSSH per-connection server daemon (10.0.0.1:36272).
Oct 8 19:57:27.897909 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 36272 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:27.900664 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:27.905142 systemd-logind[1457]: New session 24 of user core.
Oct 8 19:57:27.915325 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:57:28.039721 sshd[5287]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:28.044705 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:36272.service: Deactivated successfully.
Oct 8 19:57:28.046996 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:57:28.047755 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:57:28.048744 systemd-logind[1457]: Removed session 24.
Oct 8 19:57:28.790384 systemd-networkd[1408]: calibb275e22b6f: Gained IPv6LL
Oct 8 19:57:29.739523 containerd[1470]: time="2024-10-08T19:57:29.739441700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 8 19:57:29.745410 containerd[1470]: time="2024-10-08T19:57:29.745356043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.762042 containerd[1470]: time="2024-10-08T19:57:29.761944269Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.836292 containerd[1470]: time="2024-10-08T19:57:29.836217799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.837296 containerd[1470]: time="2024-10-08T19:57:29.837252203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.793869304s"
Oct 8 19:57:29.837296 containerd[1470]: time="2024-10-08T19:57:29.837281850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 19:57:29.839150 containerd[1470]: time="2024-10-08T19:57:29.839102238Z" level=info msg="CreateContainer within sandbox \"89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:57:30.095644 containerd[1470]: time="2024-10-08T19:57:30.095446619Z" level=info msg="CreateContainer within sandbox \"89b045b1495c329a7a1de06613396c6d3e9f459d2888ff8c92bfecd550133674\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3812267be322bae522d3b087ca64cf6e4af9c182cf49822b72b6840036e4256\""
Oct 8 19:57:30.096342 containerd[1470]: time="2024-10-08T19:57:30.096152719Z" level=info msg="StartContainer for \"c3812267be322bae522d3b087ca64cf6e4af9c182cf49822b72b6840036e4256\""
Oct 8 19:57:30.142266 systemd[1]: Started cri-containerd-c3812267be322bae522d3b087ca64cf6e4af9c182cf49822b72b6840036e4256.scope - libcontainer container c3812267be322bae522d3b087ca64cf6e4af9c182cf49822b72b6840036e4256.
Oct 8 19:57:31.097426 containerd[1470]: time="2024-10-08T19:57:31.097372328Z" level=info msg="StartContainer for \"c3812267be322bae522d3b087ca64cf6e4af9c182cf49822b72b6840036e4256\" returns successfully"
Oct 8 19:57:31.993395 kubelet[2599]: I1008 19:57:31.993321 2599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5445d866f4-l5klb" podStartSLOduration=5.19855762 podStartE2EDuration="7.993233055s" podCreationTimestamp="2024-10-08 19:57:24 +0000 UTC" firstStartedPulling="2024-10-08 19:57:27.042859856 +0000 UTC m=+88.830566991" lastFinishedPulling="2024-10-08 19:57:29.837535291 +0000 UTC m=+91.625242426" observedRunningTime="2024-10-08 19:57:31.989533482 +0000 UTC m=+93.777240617" watchObservedRunningTime="2024-10-08 19:57:31.993233055 +0000 UTC m=+93.780940190"
Oct 8 19:57:32.316551 kubelet[2599]: E1008 19:57:32.316399 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:33.053665 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:35106.service - OpenSSH per-connection server daemon (10.0.0.1:35106).
Oct 8 19:57:33.095024 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 35106 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:33.096771 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:33.100880 systemd-logind[1457]: New session 25 of user core.
Oct 8 19:57:33.111309 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:57:34.504716 sshd[5357]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:34.508418 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:35106.service: Deactivated successfully.
Oct 8 19:57:34.510232 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:57:34.510875 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:57:34.511805 systemd-logind[1457]: Removed session 25.
Oct 8 19:57:35.315647 kubelet[2599]: E1008 19:57:35.315594 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:39.521500 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:35110.service - OpenSSH per-connection server daemon (10.0.0.1:35110).
Oct 8 19:57:39.565143 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 35110 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:39.566946 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:39.571528 systemd-logind[1457]: New session 26 of user core.
Oct 8 19:57:39.587371 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:57:39.704719 sshd[5420]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:39.710061 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:35110.service: Deactivated successfully.
Oct 8 19:57:39.712896 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:57:39.713780 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:57:39.715011 systemd-logind[1457]: Removed session 26.
Oct 8 19:57:44.721348 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176).
Oct 8 19:57:44.757593 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:57:44.759545 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:44.763986 systemd-logind[1457]: New session 27 of user core.
Oct 8 19:57:44.771308 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:57:44.882657 sshd[5437]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:44.887098 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:55176.service: Deactivated successfully.
Oct 8 19:57:44.889391 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:57:44.890219 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:57:44.891158 systemd-logind[1457]: Removed session 27.