Oct 8 19:51:52.000911 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 19:51:52.000946 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:51:52.000962 kernel: BIOS-provided physical RAM map: Oct 8 19:51:52.000970 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 8 19:51:52.000979 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 8 19:51:52.000987 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 8 19:51:52.001008 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 8 19:51:52.001017 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 8 19:51:52.001026 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 8 19:51:52.001035 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 8 19:51:52.001052 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 8 19:51:52.001061 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Oct 8 19:51:52.001070 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Oct 8 19:51:52.001079 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Oct 8 19:51:52.001090 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 8 19:51:52.001103 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 8 19:51:52.001116 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 8 19:51:52.001126 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 8 19:51:52.001135 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 8 19:51:52.001144 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 19:51:52.001154 kernel: NX (Execute Disable) protection: active Oct 8 19:51:52.001163 kernel: APIC: Static calls initialized Oct 8 19:51:52.001172 kernel: efi: EFI v2.7 by EDK II Oct 8 19:51:52.001181 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Oct 8 19:51:52.001190 kernel: SMBIOS 2.8 present. Oct 8 19:51:52.001199 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 8 19:51:52.001208 kernel: Hypervisor detected: KVM Oct 8 19:51:52.001223 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 19:51:52.001233 kernel: kvm-clock: using sched offset of 5711325381 cycles Oct 8 19:51:52.001245 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 19:51:52.001255 kernel: tsc: Detected 2794.748 MHz processor Oct 8 19:51:52.001264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 19:51:52.001274 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 19:51:52.001284 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 8 19:51:52.001293 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 8 19:51:52.001303 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 19:51:52.001315 kernel: Using GB pages for direct mapping Oct 8 19:51:52.001324 kernel: Secure boot disabled Oct 8 19:51:52.001334 kernel: ACPI: Early table checksum verification disabled Oct 8 19:51:52.001344 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 8 19:51:52.001362 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 8 19:51:52.001372 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001382 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001395 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 8 19:51:52.001405 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001415 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001425 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001435 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:52.001445 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 8 19:51:52.001455 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 8 19:51:52.001468 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 8 19:51:52.001478 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 8 19:51:52.001488 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 8 19:51:52.001498 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 8 19:51:52.001508 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 8 19:51:52.001518 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 8 19:51:52.001528 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 8 19:51:52.001540 kernel: No NUMA configuration found Oct 8 19:51:52.001550 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 8 19:51:52.001564 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 8 19:51:52.001574 kernel: Zone ranges: Oct 8 19:51:52.001584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 19:51:52.001594 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 8 19:51:52.001603 kernel: Normal empty Oct 8 19:51:52.001627 
kernel: Movable zone start for each node Oct 8 19:51:52.001637 kernel: Early memory node ranges Oct 8 19:51:52.001658 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 8 19:51:52.001686 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 8 19:51:52.001743 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 8 19:51:52.001764 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 8 19:51:52.001775 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 8 19:51:52.001803 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 8 19:51:52.001814 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 8 19:51:52.001827 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 19:51:52.001837 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 8 19:51:52.001847 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 8 19:51:52.001857 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 19:51:52.001867 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 8 19:51:52.001882 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 8 19:51:52.001892 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 8 19:51:52.001902 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 19:51:52.001912 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 19:51:52.001922 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 19:51:52.001932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 19:51:52.001942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 19:51:52.001952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 19:51:52.001962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 19:51:52.001975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 19:51:52.001985 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Oct 8 19:51:52.002005 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 19:51:52.002015 kernel: TSC deadline timer available Oct 8 19:51:52.002025 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 8 19:51:52.002035 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 19:51:52.002045 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 8 19:51:52.002055 kernel: kvm-guest: setup PV sched yield Oct 8 19:51:52.002065 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 8 19:51:52.002080 kernel: Booting paravirtualized kernel on KVM Oct 8 19:51:52.002090 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 19:51:52.002100 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 8 19:51:52.002111 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 8 19:51:52.002121 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 8 19:51:52.002131 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 8 19:51:52.002140 kernel: kvm-guest: PV spinlocks enabled Oct 8 19:51:52.002151 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 19:51:52.002166 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:51:52.002180 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 8 19:51:52.002190 kernel: random: crng init done Oct 8 19:51:52.002201 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 19:51:52.002213 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 19:51:52.002224 kernel: Fallback order for Node 0: 0 Oct 8 19:51:52.002235 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 8 19:51:52.002245 kernel: Policy zone: DMA32 Oct 8 19:51:52.002255 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 19:51:52.002269 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved) Oct 8 19:51:52.002279 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 19:51:52.002289 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 19:51:52.002299 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 19:51:52.002309 kernel: Dynamic Preempt: voluntary Oct 8 19:51:52.002329 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 19:51:52.002344 kernel: rcu: RCU event tracing is enabled. Oct 8 19:51:52.002354 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 19:51:52.002365 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 19:51:52.002376 kernel: Rude variant of Tasks RCU enabled. Oct 8 19:51:52.002386 kernel: Tracing variant of Tasks RCU enabled. Oct 8 19:51:52.002397 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 19:51:52.002410 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 19:51:52.002421 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 8 19:51:52.002432 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 8 19:51:52.002442 kernel: Console: colour dummy device 80x25 Oct 8 19:51:52.002455 kernel: printk: console [ttyS0] enabled Oct 8 19:51:52.002469 kernel: ACPI: Core revision 20230628 Oct 8 19:51:52.002480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 19:51:52.002490 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 19:51:52.002501 kernel: x2apic enabled Oct 8 19:51:52.002511 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 19:51:52.002522 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 8 19:51:52.002532 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 8 19:51:52.002543 kernel: kvm-guest: setup PV IPIs Oct 8 19:51:52.002553 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 19:51:52.002567 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 19:51:52.002578 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 8 19:51:52.002588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 19:51:52.002599 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 19:51:52.002609 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 19:51:52.002620 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 19:51:52.002630 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 19:51:52.002641 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 19:51:52.002652 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 19:51:52.002665 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 19:51:52.002676 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 19:51:52.002689 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 19:51:52.002700 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 19:51:52.002710 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 19:51:52.002737 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 19:51:52.002748 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 19:51:52.002758 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 19:51:52.002773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 19:51:52.002783 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 19:51:52.002794 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 19:51:52.002805 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 8 19:51:52.002815 kernel: Freeing SMP alternatives memory: 32K Oct 8 19:51:52.002826 kernel: pid_max: default: 32768 minimum: 301 Oct 8 19:51:52.002836 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 19:51:52.002847 kernel: landlock: Up and running. Oct 8 19:51:52.002857 kernel: SELinux: Initializing. Oct 8 19:51:52.002871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:51:52.002881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:51:52.002892 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 19:51:52.002903 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:52.002913 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:52.002924 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:52.002935 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 19:51:52.002945 kernel: ... version: 0 Oct 8 19:51:52.002956 kernel: ... bit width: 48 Oct 8 19:51:52.002969 kernel: ... generic registers: 6 Oct 8 19:51:52.002979 kernel: ... value mask: 0000ffffffffffff Oct 8 19:51:52.002990 kernel: ... max period: 00007fffffffffff Oct 8 19:51:52.003010 kernel: ... fixed-purpose events: 0 Oct 8 19:51:52.003020 kernel: ... event mask: 000000000000003f Oct 8 19:51:52.003031 kernel: signal: max sigframe size: 1776 Oct 8 19:51:52.003041 kernel: rcu: Hierarchical SRCU implementation. Oct 8 19:51:52.003052 kernel: rcu: Max phase no-delay instances is 400. Oct 8 19:51:52.003063 kernel: smp: Bringing up secondary CPUs ... Oct 8 19:51:52.003076 kernel: smpboot: x86: Booting SMP configuration: Oct 8 19:51:52.003087 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 8 19:51:52.003097 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 19:51:52.003108 kernel: smpboot: Max logical packages: 1 Oct 8 19:51:52.003118 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 8 19:51:52.003129 kernel: devtmpfs: initialized Oct 8 19:51:52.003139 kernel: x86/mm: Memory block size: 128MB Oct 8 19:51:52.003149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 8 19:51:52.003160 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 8 19:51:52.003174 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 8 19:51:52.003185 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 8 19:51:52.003196 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 8 19:51:52.003207 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 19:51:52.003217 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 19:51:52.003228 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 19:51:52.003239 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 19:51:52.003249 kernel: audit: initializing netlink subsys (disabled) Oct 8 19:51:52.003260 kernel: audit: type=2000 audit(1728417110.637:1): state=initialized audit_enabled=0 res=1 Oct 8 19:51:52.003281 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 19:51:52.003304 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 19:51:52.003324 kernel: cpuidle: using governor menu Oct 8 19:51:52.003346 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 19:51:52.003369 kernel: dca service started, version 1.12.1 Oct 8 19:51:52.003393 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 19:51:52.003418 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 8 19:51:52.003432 kernel: PCI: Using configuration type 1 for base access Oct 8 19:51:52.003443 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 8 19:51:52.003458 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 19:51:52.003468 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 19:51:52.003479 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 19:51:52.003489 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 19:51:52.003500 kernel: ACPI: Added _OSI(Module Device) Oct 8 19:51:52.003510 kernel: ACPI: Added _OSI(Processor Device) Oct 8 19:51:52.003521 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 19:51:52.003531 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 19:51:52.003542 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 19:51:52.003556 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 19:51:52.003567 kernel: ACPI: Interpreter enabled Oct 8 19:51:52.003577 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 19:51:52.003588 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 19:51:52.003599 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 19:51:52.003609 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 19:51:52.003620 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 19:51:52.003631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 19:51:52.004045 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 19:51:52.004221 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 19:51:52.004376 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 19:51:52.004389 kernel: PCI host bridge to bus 0000:00 Oct 8 19:51:52.004559 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 19:51:52.004702 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 19:51:52.004871 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 19:51:52.005032 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 8 19:51:52.005177 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 19:51:52.005325 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 8 19:51:52.005550 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 19:51:52.005796 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 19:51:52.005983 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 8 19:51:52.006161 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 8 19:51:52.006322 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 8 19:51:52.006484 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 8 19:51:52.006730 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 8 19:51:52.006945 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 19:51:52.007147 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 19:51:52.007317 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 8 19:51:52.007491 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 8 19:51:52.007656 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 8 19:51:52.007913 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 8 19:51:52.008101 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 8 19:51:52.008268 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 8 19:51:52.008411 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 8 19:51:52.008569 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Oct 8 19:51:52.008736 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 8 19:51:52.008910 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 8 19:51:52.009070 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 8 19:51:52.009200 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 8 19:51:52.009365 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 19:51:52.009539 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 19:51:52.009779 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 19:51:52.009979 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 8 19:51:52.010161 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 8 19:51:52.010346 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 19:51:52.010513 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 8 19:51:52.010529 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 19:51:52.010540 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 19:51:52.010551 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 19:51:52.010567 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 19:51:52.010578 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 19:51:52.010589 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 19:51:52.010600 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 19:51:52.010610 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 19:51:52.010621 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 19:51:52.010632 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 19:51:52.010642 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 19:51:52.010653 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 19:51:52.010667 
kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 19:51:52.010678 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 19:51:52.010689 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 19:51:52.010700 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 19:51:52.010710 kernel: iommu: Default domain type: Translated Oct 8 19:51:52.010813 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 19:51:52.010824 kernel: efivars: Registered efivars operations Oct 8 19:51:52.010834 kernel: PCI: Using ACPI for IRQ routing Oct 8 19:51:52.010845 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 19:51:52.010861 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 8 19:51:52.010872 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 8 19:51:52.010882 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 8 19:51:52.010893 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 8 19:51:52.011077 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 19:51:52.011312 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 19:51:52.011478 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 19:51:52.011492 kernel: vgaarb: loaded Oct 8 19:51:52.011501 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 8 19:51:52.011517 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 19:51:52.011526 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 19:51:52.011536 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 19:51:52.011546 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 19:51:52.011555 kernel: pnp: PnP ACPI init Oct 8 19:51:52.011739 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 19:51:52.011755 kernel: pnp: PnP ACPI: found 6 devices Oct 8 19:51:52.011765 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns Oct 8 19:51:52.011779 kernel: NET: Registered PF_INET protocol family Oct 8 19:51:52.011789 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 19:51:52.011799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 19:51:52.011809 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 19:51:52.011819 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 19:51:52.011828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 19:51:52.011838 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 19:51:52.011847 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:51:52.011857 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:51:52.011870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:51:52.011880 kernel: NET: Registered PF_XDP protocol family Oct 8 19:51:52.012040 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 8 19:51:52.012185 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 8 19:51:52.012306 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 19:51:52.012436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 19:51:52.012584 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 19:51:52.012761 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 8 19:51:52.012918 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 19:51:52.013084 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 8 19:51:52.013102 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:51:52.013113 kernel: Initialise system trusted keyrings Oct 8 19:51:52.013124 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Oct 8 19:51:52.013135 kernel: Key type asymmetric registered Oct 8 19:51:52.013146 kernel: Asymmetric key parser 'x509' registered Oct 8 19:51:52.013157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 19:51:52.013174 kernel: io scheduler mq-deadline registered Oct 8 19:51:52.013185 kernel: io scheduler kyber registered Oct 8 19:51:52.013195 kernel: io scheduler bfq registered Oct 8 19:51:52.013206 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 19:51:52.013218 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 19:51:52.013229 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 19:51:52.013240 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 8 19:51:52.013251 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:51:52.013263 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 19:51:52.013274 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 19:51:52.013289 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 19:51:52.013299 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 19:51:52.013488 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 8 19:51:52.013503 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 19:51:52.013639 kernel: rtc_cmos 00:04: registered as rtc0 Oct 8 19:51:52.013853 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:51:51 UTC (1728417111) Oct 8 19:51:52.013980 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 19:51:52.014008 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 19:51:52.014016 kernel: efifb: probing for efifb Oct 8 19:51:52.014024 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Oct 8 19:51:52.014032 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Oct 8 19:51:52.014040 kernel: efifb: scrolling: redraw Oct 8 
19:51:52.014048 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Oct 8 19:51:52.014056 kernel: Console: switching to colour frame buffer device 100x37 Oct 8 19:51:52.014082 kernel: fb0: EFI VGA frame buffer device Oct 8 19:51:52.014092 kernel: pstore: Using crash dump compression: deflate Oct 8 19:51:52.014103 kernel: pstore: Registered efi_pstore as persistent store backend Oct 8 19:51:52.014111 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:51:52.014119 kernel: Segment Routing with IPv6 Oct 8 19:51:52.014127 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:51:52.014135 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:51:52.014143 kernel: Key type dns_resolver registered Oct 8 19:51:52.014151 kernel: IPI shorthand broadcast: enabled Oct 8 19:51:52.014159 kernel: sched_clock: Marking stable (1217002955, 308690006)->(1733463042, -207770081) Oct 8 19:51:52.014167 kernel: registered taskstats version 1 Oct 8 19:51:52.014175 kernel: Loading compiled-in X.509 certificates Oct 8 19:51:52.014186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 19:51:52.014194 kernel: Key type .fscrypt registered Oct 8 19:51:52.014202 kernel: Key type fscrypt-provisioning registered Oct 8 19:51:52.014210 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 19:51:52.014218 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:51:52.014226 kernel: ima: No architecture policies found Oct 8 19:51:52.014234 kernel: clk: Disabling unused clocks Oct 8 19:51:52.014242 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 19:51:52.014253 kernel: Write protecting the kernel read-only data: 36864k Oct 8 19:51:52.014261 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 19:51:52.014269 kernel: Run /init as init process Oct 8 19:51:52.014277 kernel: with arguments: Oct 8 19:51:52.014285 kernel: /init Oct 8 19:51:52.014293 kernel: with environment: Oct 8 19:51:52.014301 kernel: HOME=/ Oct 8 19:51:52.014309 kernel: TERM=linux Oct 8 19:51:52.014317 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:51:52.014330 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:51:52.014341 systemd[1]: Detected virtualization kvm. Oct 8 19:51:52.014349 systemd[1]: Detected architecture x86-64. Oct 8 19:51:52.014358 systemd[1]: Running in initrd. Oct 8 19:51:52.014371 systemd[1]: No hostname configured, using default hostname. Oct 8 19:51:52.014379 systemd[1]: Hostname set to . Oct 8 19:51:52.014388 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:51:52.014397 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:51:52.014405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:51:52.014414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:51:52.014423 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 8 19:51:52.014432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:51:52.014443 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:51:52.014452 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:51:52.014462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:51:52.014471 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:51:52.014479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:52.014488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:51:52.014496 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:51:52.014507 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:51:52.014516 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:51:52.014525 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:51:52.014533 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:51:52.014541 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:51:52.014550 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:51:52.014559 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:51:52.014567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:52.014578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:51:52.014587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:51:52.014595 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:51:52.014604 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:51:52.014613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:51:52.014621 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:51:52.014630 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:51:52.014638 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:51:52.014647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:51:52.014658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:52.014666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:51:52.014675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:51:52.014683 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:51:52.014727 systemd-journald[193]: Collecting audit messages is disabled.
Oct 8 19:51:52.014762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:51:52.014771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:52.014779 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:51:52.014791 systemd-journald[193]: Journal started
Oct 8 19:51:52.014810 systemd-journald[193]: Runtime Journal (/run/log/journal/de3de2f3b8b64e3c9ba8b1f66ba9b3e8) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:51:52.021389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:52.003916 systemd-modules-load[195]: Inserted module 'overlay'
Oct 8 19:51:52.028746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:51:52.028811 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:51:52.034913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:51:52.057190 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:51:52.057465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:51:52.062092 systemd-modules-load[195]: Inserted module 'br_netfilter'
Oct 8 19:51:52.062577 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:52.065207 kernel: Bridge firewalling registered
Oct 8 19:51:52.065047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:52.066694 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:51:52.071080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:51:52.083218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:51:52.086164 dracut-cmdline[221]: dracut-dracut-053
Oct 8 19:51:52.087466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:52.089180 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:51:52.115041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:52.149610 systemd-resolved[256]: Positive Trust Anchors:
Oct 8 19:51:52.149631 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:51:52.149662 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:51:52.152417 systemd-resolved[256]: Defaulting to hostname 'linux'.
Oct 8 19:51:52.153657 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:51:52.167348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:51:52.222765 kernel: SCSI subsystem initialized
Oct 8 19:51:52.232755 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:51:52.243747 kernel: iscsi: registered transport (tcp)
Oct 8 19:51:52.285858 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:51:52.285960 kernel: QLogic iSCSI HBA Driver
Oct 8 19:51:52.347054 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:51:52.394871 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:51:52.421439 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:51:52.421530 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:51:52.421543 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:51:52.481766 kernel: raid6: avx2x4 gen() 29841 MB/s
Oct 8 19:51:52.498782 kernel: raid6: avx2x2 gen() 26888 MB/s
Oct 8 19:51:52.515901 kernel: raid6: avx2x1 gen() 21395 MB/s
Oct 8 19:51:52.516005 kernel: raid6: using algorithm avx2x4 gen() 29841 MB/s
Oct 8 19:51:52.534116 kernel: raid6: .... xor() 5832 MB/s, rmw enabled
Oct 8 19:51:52.534235 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:51:52.562767 kernel: xor: automatically using best checksumming function avx
Oct 8 19:51:52.733765 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:51:52.748378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:51:52.760053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:51:52.775945 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Oct 8 19:51:52.781519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:51:52.789882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:51:52.810002 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Oct 8 19:51:52.854271 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:51:52.877161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:51:52.956178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:52.987826 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:51:53.000755 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:51:53.002916 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:51:53.004776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:51:53.008427 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:51:53.018956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:53.021271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:51:53.029377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:51:53.029407 kernel: GPT:9289727 != 19775487
Oct 8 19:51:53.029425 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:51:53.029437 kernel: GPT:9289727 != 19775487
Oct 8 19:51:53.029449 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:51:53.029461 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:53.033893 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:51:53.046347 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:51:53.042913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:51:53.048958 kernel: libata version 3.00 loaded.
Oct 8 19:51:53.043142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:53.048146 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:53.054061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:51:53.055671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:53.058476 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:53.067739 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:51:53.067773 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:51:53.072543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:53.088085 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:51:53.088306 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:51:53.089142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:51:53.096498 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:51:53.096787 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:51:53.096989 kernel: scsi host0: ahci
Oct 8 19:51:53.099737 kernel: scsi host1: ahci
Oct 8 19:51:53.101157 kernel: scsi host2: ahci
Oct 8 19:51:53.105734 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (455)
Oct 8 19:51:53.107734 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470)
Oct 8 19:51:53.109755 kernel: scsi host3: ahci
Oct 8 19:51:53.110756 kernel: scsi host4: ahci
Oct 8 19:51:53.114789 kernel: scsi host5: ahci
Oct 8 19:51:53.114995 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 8 19:51:53.115008 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 8 19:51:53.115021 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 8 19:51:53.115031 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 8 19:51:53.125893 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:51:53.135326 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 8 19:51:53.135369 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 8 19:51:53.145110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:51:53.151284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:51:53.152586 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:51:53.175690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:51:53.192011 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:51:53.193325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:51:53.193413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:53.195695 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:53.199128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:53.217477 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:53.225549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:53.251752 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:53.458766 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:51:53.458856 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:53.459755 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:53.459790 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:53.460759 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:51:53.461743 kernel: ata3.00: applying bridge limits
Oct 8 19:51:53.462746 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:53.462775 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:53.463759 kernel: ata3.00: configured for UDMA/100
Oct 8 19:51:53.464750 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:51:53.472671 disk-uuid[555]: Primary Header is updated.
Oct 8 19:51:53.472671 disk-uuid[555]: Secondary Entries is updated.
Oct 8 19:51:53.472671 disk-uuid[555]: Secondary Header is updated.
Oct 8 19:51:53.476533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:53.480752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:53.532317 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:51:53.532755 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:51:53.548742 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:51:54.495751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:54.496185 disk-uuid[570]: The operation has completed successfully.
Oct 8 19:51:54.528368 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:51:54.528530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:51:54.554156 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:51:54.573616 sh[595]: Success
Oct 8 19:51:54.631781 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:51:54.671349 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:51:54.707825 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:51:54.709888 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:51:54.750330 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:51:54.750418 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:54.750430 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:51:54.751343 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:51:54.752088 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:51:54.757269 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:51:54.759617 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:51:54.766879 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:51:54.768056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:51:54.812500 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:54.812550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:54.812561 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:54.819302 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:54.830937 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:51:54.865950 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:54.922495 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:51:54.934873 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:51:55.012875 systemd-networkd[773]: lo: Link UP
Oct 8 19:51:55.012886 systemd-networkd[773]: lo: Gained carrier
Oct 8 19:51:55.014809 systemd-networkd[773]: Enumeration completed
Oct 8 19:51:55.014985 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:51:55.015311 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:55.015316 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:51:55.021613 systemd-networkd[773]: eth0: Link UP
Oct 8 19:51:55.021618 systemd-networkd[773]: eth0: Gained carrier
Oct 8 19:51:55.021629 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:55.022648 systemd[1]: Reached target network.target - Network.
Oct 8 19:51:55.045868 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:51:55.422668 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:51:55.519949 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:51:55.585079 ignition[778]: Ignition 2.19.0
Oct 8 19:51:55.585091 ignition[778]: Stage: fetch-offline
Oct 8 19:51:55.585139 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:55.585151 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:55.585267 ignition[778]: parsed url from cmdline: ""
Oct 8 19:51:55.585271 ignition[778]: no config URL provided
Oct 8 19:51:55.585277 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:51:55.585287 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:51:55.585318 ignition[778]: op(1): [started] loading QEMU firmware config module
Oct 8 19:51:55.585323 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:51:55.629097 ignition[778]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:51:55.670989 ignition[778]: parsing config with SHA512: f0fb92d3a923243d68146fa966db9fd22e92e0c09405f64658ff040e3585d144b1d4868b53ab1d23d80ecfb27b4299346269f267cd0dda9f54584550163d9c1d
Oct 8 19:51:55.676530 unknown[778]: fetched base config from "system"
Oct 8 19:51:55.676549 unknown[778]: fetched user config from "qemu"
Oct 8 19:51:55.677405 ignition[778]: fetch-offline: fetch-offline passed
Oct 8 19:51:55.677501 ignition[778]: Ignition finished successfully
Oct 8 19:51:55.712652 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:51:55.713206 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:51:55.717990 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:51:55.734990 ignition[788]: Ignition 2.19.0
Oct 8 19:51:55.735001 ignition[788]: Stage: kargs
Oct 8 19:51:55.795349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:51:55.735177 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:55.735189 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:55.736004 ignition[788]: kargs: kargs passed
Oct 8 19:51:55.736054 ignition[788]: Ignition finished successfully
Oct 8 19:51:55.802897 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:51:55.819294 ignition[796]: Ignition 2.19.0
Oct 8 19:51:55.819307 ignition[796]: Stage: disks
Oct 8 19:51:55.819481 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:55.819494 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:55.902292 ignition[796]: disks: disks passed
Oct 8 19:51:55.902975 ignition[796]: Ignition finished successfully
Oct 8 19:51:55.905892 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:51:55.959695 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:51:55.961058 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:51:55.963612 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:51:55.966142 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:51:55.968217 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:51:55.981021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:51:55.997863 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:51:56.446223 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:51:56.477860 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:51:56.634763 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:51:56.635889 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:51:56.687685 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:51:56.699911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:56.701981 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:51:56.703218 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:51:56.708539 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Oct 8 19:51:56.708561 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:56.703269 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:51:56.715293 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:56.715316 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:56.715328 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:56.703295 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:51:56.711346 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:51:56.716469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:56.719583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:51:56.762142 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:51:56.766803 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:51:56.770761 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:51:56.776181 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:51:56.877668 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:51:56.895822 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:51:56.945942 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:51:56.954412 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:51:56.956094 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:56.973390 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:51:57.017845 systemd-networkd[773]: eth0: Gained IPv6LL
Oct 8 19:51:57.078917 ignition[931]: INFO : Ignition 2.19.0
Oct 8 19:51:57.078917 ignition[931]: INFO : Stage: mount
Oct 8 19:51:57.080881 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:57.080881 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:57.083589 ignition[931]: INFO : mount: mount passed
Oct 8 19:51:57.084330 ignition[931]: INFO : Ignition finished successfully
Oct 8 19:51:57.087467 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:51:57.106805 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:51:57.653976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:57.738875 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Oct 8 19:51:57.742412 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:57.742444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:57.742474 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:57.746769 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:57.748751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:57.781738 ignition[958]: INFO : Ignition 2.19.0
Oct 8 19:51:57.781738 ignition[958]: INFO : Stage: files
Oct 8 19:51:57.810303 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:57.810303 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:57.813565 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:51:57.815826 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:51:57.815826 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:51:57.821244 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:51:57.822856 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:51:57.824883 unknown[958]: wrote ssh authorized keys file for user: core
Oct 8 19:51:57.826180 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:51:57.829106 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:57.831262 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:51:57.876598 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:51:58.034039 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:58.034039 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:58.073553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 8 19:51:58.523521 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:51:59.068188 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:59.068188 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:51:59.072377 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:59.124153 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:59.164644 ignition[958]: INFO : files: files passed
Oct 8 19:51:59.164644 ignition[958]: INFO : Ignition finished successfully
Oct 8 19:51:59.133949 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:51:59.176116 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:51:59.180652 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:51:59.201947 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:51:59.203109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:51:59.208015 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:51:59.212423 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:59.212423 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:59.216085 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:59.219644 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:51:59.280668 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:51:59.286899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:51:59.315167 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:51:59.371877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:51:59.374519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:51:59.376530 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:51:59.378513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:51:59.380768 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:51:59.398817 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:51:59.478922 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:51:59.492287 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:51:59.494829 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:59.497252 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:51:59.499207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:51:59.500403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:51:59.503556 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:51:59.505658 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:51:59.507560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:51:59.509921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:51:59.546536 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:51:59.548851 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:51:59.551010 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:51:59.553560 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:51:59.555676 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:51:59.557735 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:51:59.559371 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:51:59.560428 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:51:59.562746 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:51:59.564981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:59.567367 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:51:59.568344 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:51:59.602312 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:51:59.603391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:51:59.605708 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:51:59.606840 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:51:59.609288 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:51:59.611092 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:51:59.612238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:51:59.660321 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:51:59.662215 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:51:59.664122 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:51:59.664994 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:51:59.666974 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:51:59.667870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:51:59.669943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:51:59.671145 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:51:59.673626 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:51:59.674610 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:51:59.710984 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:51:59.783242 ignition[1013]: INFO : Ignition 2.19.0
Oct 8 19:51:59.783242 ignition[1013]: INFO : Stage: umount
Oct 8 19:51:59.783242 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:59.783242 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:59.783242 ignition[1013]: INFO : umount: umount passed
Oct 8 19:51:59.783242 ignition[1013]: INFO : Ignition finished successfully
Oct 8 19:51:59.789675 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:51:59.791555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:51:59.792750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:59.795374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:51:59.796576 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:51:59.831307 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:51:59.832547 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:51:59.836505 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:51:59.837560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:51:59.840883 systemd[1]: Stopped target network.target - Network.
Oct 8 19:51:59.897885 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:51:59.898964 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:51:59.901261 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:51:59.902317 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:51:59.904533 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:51:59.905622 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:51:59.907853 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:51:59.907922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:51:59.911555 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:51:59.965419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:59.968688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:51:59.969785 systemd-networkd[773]: eth0: DHCPv6 lease lost
Oct 8 19:51:59.970920 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:51:59.972119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:51:59.974399 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:51:59.975702 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:51:59.978190 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:51:59.979271 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:51:59.984440 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:51:59.984502 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:59.987849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:51:59.987912 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:51:59.997934 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:51:59.998445 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:51:59.998529 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:51:59.999024 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:51:59.999086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:59.999321 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:51:59.999381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:59.999669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:51:59.999748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:52:00.000412 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:52:00.020940 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:52:00.021124 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:52:00.037740 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:52:00.038013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:52:00.039803 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:52:00.039875 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:52:00.042553 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:52:00.042614 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:52:00.043027 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:52:00.043094 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:52:00.043743 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:52:00.043827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:52:00.044540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:52:00.044605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:52:00.077157 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:52:00.080285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:52:00.080397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:52:00.081291 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 19:52:00.081369 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:52:00.084804 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:52:00.084893 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:52:00.111265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:52:00.111362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:52:00.131367 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:52:00.131533 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:52:00.133917 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:52:00.136932 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:52:00.152410 systemd[1]: Switching root.
Oct 8 19:52:00.185268 systemd-journald[193]: Journal stopped
Oct 8 19:52:02.115264 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:52:02.115346 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:52:02.115365 kernel: SELinux: policy capability open_perms=1
Oct 8 19:52:02.115377 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:52:02.115389 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:52:02.115407 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:52:02.115419 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:52:02.115431 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:52:02.115442 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:52:02.115458 kernel: audit: type=1403 audit(1728417120.958:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:52:02.115470 systemd[1]: Successfully loaded SELinux policy in 47.170ms.
Oct 8 19:52:02.115494 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.069ms.
Oct 8 19:52:02.115508 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:52:02.115520 systemd[1]: Detected virtualization kvm.
Oct 8 19:52:02.115533 systemd[1]: Detected architecture x86-64.
Oct 8 19:52:02.115545 systemd[1]: Detected first boot.
Oct 8 19:52:02.115558 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:52:02.115570 zram_generator::config[1058]: No configuration found.
Oct 8 19:52:02.115585 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:52:02.115598 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:52:02.115610 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:52:02.115622 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:52:02.115635 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:52:02.115648 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:52:02.115660 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:52:02.115672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:52:02.115688 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:52:02.115700 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:52:02.115738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:52:02.115750 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:52:02.115762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:52:02.115775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:52:02.115787 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:52:02.115800 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:52:02.115816 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:52:02.115829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:52:02.115841 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:52:02.115853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:52:02.115866 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:52:02.115878 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:52:02.115890 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:52:02.115902 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:52:02.115918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:52:02.115933 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:52:02.115948 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:52:02.115960 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:52:02.115972 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:52:02.115986 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:52:02.115998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:52:02.116010 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:52:02.116022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:52:02.116034 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:52:02.116049 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:52:02.116061 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:52:02.116073 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:52:02.116085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:02.116097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:52:02.116110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:52:02.116123 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:52:02.116135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:52:02.116150 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:52:02.116162 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:52:02.116175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:52:02.116187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:52:02.116199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:52:02.116211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:52:02.116223 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:52:02.116235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:52:02.116251 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:52:02.116263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:52:02.116278 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:52:02.116292 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:52:02.116305 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:52:02.116317 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:52:02.116329 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:52:02.116341 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:52:02.116352 kernel: loop: module loaded
Oct 8 19:52:02.116367 kernel: fuse: init (API version 7.39)
Oct 8 19:52:02.116380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:52:02.116392 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:52:02.116424 systemd-journald[1121]: Collecting audit messages is disabled.
Oct 8 19:52:02.116447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:52:02.116459 systemd-journald[1121]: Journal started
Oct 8 19:52:02.116484 systemd-journald[1121]: Runtime Journal (/run/log/journal/de3de2f3b8b64e3c9ba8b1f66ba9b3e8) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:52:01.744767 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:52:01.773440 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:52:01.774074 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:52:01.774611 systemd[1]: systemd-journald.service: Consumed 2.000s CPU time.
Oct 8 19:52:02.131949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:52:02.133840 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:52:02.133873 systemd[1]: Stopped verity-setup.service.
Oct 8 19:52:02.136734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:02.144082 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:52:02.143675 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:52:02.144932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:52:02.146273 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:52:02.148480 kernel: ACPI: bus type drm_connector registered
Oct 8 19:52:02.148087 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:52:02.149316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:52:02.150690 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:52:02.152025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:52:02.153579 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:52:02.153854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:52:02.155411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:52:02.155586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:52:02.157026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:52:02.157203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:52:02.159297 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:52:02.159547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:52:02.161066 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:52:02.161243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:52:02.162613 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:52:02.162838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:52:02.164645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:52:02.166267 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:52:02.168079 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:52:02.186148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:52:02.197884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:52:02.208979 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:52:02.210537 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:52:02.210582 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:52:02.213221 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:52:02.216248 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:52:02.221125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:52:02.222668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:52:02.225643 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:52:02.233339 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:52:02.234850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:52:02.237500 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:52:02.242084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:52:02.244554 systemd-journald[1121]: Time spent on flushing to /var/log/journal/de3de2f3b8b64e3c9ba8b1f66ba9b3e8 is 22.776ms for 993 entries.
Oct 8 19:52:02.244554 systemd-journald[1121]: System Journal (/var/log/journal/de3de2f3b8b64e3c9ba8b1f66ba9b3e8) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:52:02.577045 systemd-journald[1121]: Received client request to flush runtime journal.
Oct 8 19:52:02.577170 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 19:52:02.577220 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:52:02.577249 kernel: loop1: detected capacity change from 0 to 210664
Oct 8 19:52:02.244686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:52:02.259618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:52:02.269890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:52:02.273896 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:52:02.275661 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:52:02.277171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:52:02.278879 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:52:02.287804 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:52:02.315363 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:52:02.391935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:52:02.450794 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Oct 8 19:52:02.450814 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Oct 8 19:52:02.458596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:52:02.546348 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:52:02.548434 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:52:02.560117 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:52:02.581044 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:52:02.760599 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:52:02.777024 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:52:02.853414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:52:02.857748 kernel: loop2: detected capacity change from 0 to 142488
Oct 8 19:52:02.858907 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:52:02.863371 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:52:02.874078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:52:02.905649 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Oct 8 19:52:02.905699 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Oct 8 19:52:02.923643 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:52:02.931835 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 19:52:02.966234 kernel: loop4: detected capacity change from 0 to 210664
Oct 8 19:52:03.010656 kernel: loop5: detected capacity change from 0 to 142488
Oct 8 19:52:02.989797 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:52:02.990437 (sd-merge)[1202]: Merged extensions into '/usr'.
Oct 8 19:52:03.043808 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:52:03.043833 systemd[1]: Reloading...
Oct 8 19:52:03.125760 zram_generator::config[1227]: No configuration found.
Oct 8 19:52:03.249305 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:52:03.274739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:52:03.327680 systemd[1]: Reloading finished in 283 ms.
Oct 8 19:52:03.364525 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:52:03.411098 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:52:03.413959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:52:03.423485 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:52:03.423503 systemd[1]: Reloading...
Oct 8 19:52:03.448356 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:52:03.449192 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:52:03.450307 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:52:03.452563 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Oct 8 19:52:03.452739 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Oct 8 19:52:03.456172 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:52:03.456183 systemd-tmpfiles[1265]: Skipping /boot
Oct 8 19:52:03.469576 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:52:03.469660 systemd-tmpfiles[1265]: Skipping /boot
Oct 8 19:52:03.503910 zram_generator::config[1296]: No configuration found.
Oct 8 19:52:03.590500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:52:03.643045 systemd[1]: Reloading finished in 219 ms.
Oct 8 19:52:03.670832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:52:03.687411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:52:03.699280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:52:03.764096 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:52:03.775500 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:52:03.780096 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:52:03.785822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:52:03.790412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:03.790618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:52:03.792238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:52:03.798482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:52:03.802455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:52:03.803979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:52:03.808038 augenrules[1354]: No rules
Oct 8 19:52:03.808285 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:52:03.809604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:03.810904 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:52:03.846324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:52:03.846761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:52:03.848553 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:52:03.850558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:52:03.850813 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:52:03.853022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:52:03.853253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:52:03.864506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:03.864792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:52:03.878220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:52:03.881780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:52:03.884749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:52:03.885959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:52:03.886096 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:03.887108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:52:03.888889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:52:03.889071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:52:03.892996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:52:03.893203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:52:03.945993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:52:03.946256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:52:03.952686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:03.953038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:52:03.966999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:52:03.969779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:52:03.973403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:52:03.978871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:52:04.027394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:52:04.027799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:52:04.029848 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:52:04.032131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:52:04.032474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:52:04.034461 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:52:04.034671 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:52:04.036798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:52:04.037029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:52:04.038951 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:52:04.039140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:52:04.044309 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:52:04.050209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:52:04.050285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:52:04.062947 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:52:04.194231 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:52:04.196266 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:52:04.197925 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:52:04.199113 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:52:04.206853 systemd-resolved[1347]: Positive Trust Anchors:
Oct 8 19:52:04.206872 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:52:04.206902 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:52:04.210737 systemd-resolved[1347]: Defaulting to hostname 'linux'.
Oct 8 19:52:04.212385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:52:04.213657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:52:04.264358 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:52:04.329982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:52:04.332786 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:52:04.349160 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:52:04.358812 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Oct 8 19:52:04.376864 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:52:04.484829 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:52:04.538796 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:52:04.605771 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1400)
Oct 8 19:52:04.609673 systemd-networkd[1398]: lo: Link UP
Oct 8 19:52:04.609684 systemd-networkd[1398]: lo: Gained carrier
Oct 8 19:52:04.612495 systemd-networkd[1398]: Enumeration completed
Oct 8 19:52:04.612604 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:52:04.612944 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:52:04.612948 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:52:04.615227 systemd-networkd[1398]: eth0: Link UP
Oct 8 19:52:04.615231 systemd-networkd[1398]: eth0: Gained carrier
Oct 8 19:52:04.615244 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:52:04.615749 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1400)
Oct 8 19:52:04.615729 systemd[1]: Reached target network.target - Network.
Oct 8 19:52:04.628783 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 8 19:52:04.642987 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:52:04.648810 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:52:04.649828 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Oct 8 19:52:05.316659 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:52:05.316700 systemd-timesyncd[1387]: Initial clock synchronization to Tue 2024-10-08 19:52:05.316558 UTC.
Oct 8 19:52:05.316735 systemd-resolved[1347]: Clock change detected. Flushing caches.
Oct 8 19:52:05.395575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1407)
Oct 8 19:52:05.399361 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:52:05.415963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:52:05.417557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Oct 8 19:52:05.433797 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:52:05.437827 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:52:05.493562 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 8 19:52:05.493865 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 19:52:05.495806 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 19:52:05.496163 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 19:52:05.500753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:52:05.503573 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:52:05.507987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:52:05.513726 kernel: kvm_amd: TSC scaling supported
Oct 8 19:52:05.513766 kernel: kvm_amd: Nested Virtualization enabled
Oct 8 19:52:05.513779 kernel: kvm_amd: Nested Paging enabled
Oct 8 19:52:05.513815 kernel: kvm_amd: LBR virtualization supported
Oct 8 19:52:05.514797 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 8 19:52:05.514813 kernel: kvm_amd: Virtual GIF supported
Oct 8 19:52:05.595591 kernel: EDAC MC: Ver: 3.0.0
Oct 8 19:52:05.635255 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:52:05.692679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:52:05.705825 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:52:05.715731 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:52:05.746402 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:52:05.790719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:52:05.792280 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:52:05.793736 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:52:05.795192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:52:05.796856 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:52:05.798206 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:52:05.799515 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:52:05.800831 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:52:05.800872 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:52:05.801822 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:52:05.803800 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:52:05.807245 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:52:05.816937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:52:05.820001 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:52:05.821864 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:52:05.823214 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:52:05.824400 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:52:05.825637 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:52:05.825675 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:52:05.827016 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:52:05.829579 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:52:05.834902 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:52:05.837834 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:52:05.839037 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:52:05.842554 jq[1446]: false
Oct 8 19:52:05.842745 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:52:05.844940 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:52:05.850663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:52:05.853021 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:52:05.937824 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:52:05.938520 dbus-daemon[1445]: [system] SELinux support is enabled
Oct 8 19:52:05.943818 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:52:05.945648 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:52:05.946298 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:52:05.948272 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:52:05.952097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:52:05.954409 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:52:05.958161 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found loop3
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found loop4
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found loop5
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found sr0
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda1
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda2
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda3
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found usr
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda4
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda6
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda7
Oct 8 19:52:06.068896 extend-filesystems[1447]: Found vda9
Oct 8 19:52:06.068896 extend-filesystems[1447]: Checking size of /dev/vda9
Oct 8 19:52:06.091171 jq[1462]: true
Oct 8 19:52:06.094690 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:52:06.071126 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:52:06.095124 update_engine[1459]: I20241008 19:52:06.094677 1459 main.cc:92] Flatcar Update Engine starting
Oct 8 19:52:06.071349 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:52:06.071730 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:52:06.071933 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:52:06.077048 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:52:06.077339 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:52:06.096683 update_engine[1459]: I20241008 19:52:06.096447 1459 update_check_scheduler.cc:74] Next update check in 4m17s
Oct 8 19:52:06.098568 jq[1471]: true
Oct 8 19:52:06.106245 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:52:06.110922 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:52:06.110991 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:52:06.112605 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:52:06.112650 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:52:06.113960 tar[1467]: linux-amd64/helm
Oct 8 19:52:06.114774 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:52:06.125737 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:52:06.127409 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:52:06.130664 extend-filesystems[1447]: Resized partition /dev/vda9
Oct 8 19:52:06.135394 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 8 19:52:06.136626 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 19:52:06.139375 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:52:06.136867 systemd-logind[1458]: New seat seat0.
Oct 8 19:52:06.240975 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:52:06.242088 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:52:06.255556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1401)
Oct 8 19:52:06.273704 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:52:06.273975 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:52:06.402771 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:52:06.451725 systemd-networkd[1398]: eth0: Gained IPv6LL
Oct 8 19:52:06.455006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:52:06.544288 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:52:06.551862 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 8 19:52:06.564786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:06.570860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:52:06.588122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:52:06.703187 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 8 19:52:06.703440 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 8 19:52:06.725802 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:52:06.756900 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:52:06.759251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:52:06.773894 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:52:06.774403 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:52:06.781025 tar[1467]: linux-amd64/LICENSE Oct 8 19:52:06.781183 tar[1467]: linux-amd64/README.md Oct 8 19:52:06.799078 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:52:07.012034 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:52:07.387994 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:52:07.746579 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:52:09.325721 containerd[1474]: time="2024-10-08T19:52:09.325566990Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:52:09.333793 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:52:09.333793 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:52:09.333793 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:52:09.338863 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Oct 8 19:52:09.341619 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:52:09.342016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:52:09.353800 containerd[1474]: time="2024-10-08T19:52:09.353719516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356158 containerd[1474]: time="2024-10-08T19:52:09.356084491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356158 containerd[1474]: time="2024-10-08T19:52:09.356126099Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 8 19:52:09.356158 containerd[1474]: time="2024-10-08T19:52:09.356147560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:52:09.356444 containerd[1474]: time="2024-10-08T19:52:09.356409691Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:52:09.356444 containerd[1474]: time="2024-10-08T19:52:09.356440800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356585 containerd[1474]: time="2024-10-08T19:52:09.356547700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356628 containerd[1474]: time="2024-10-08T19:52:09.356581203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356654 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:52:09.356975 containerd[1474]: time="2024-10-08T19:52:09.356823828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356975 containerd[1474]: time="2024-10-08T19:52:09.356849626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356975 containerd[1474]: time="2024-10-08T19:52:09.356865716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:52:09.356975 containerd[1474]: time="2024-10-08T19:52:09.356876457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.357071 containerd[1474]: time="2024-10-08T19:52:09.357033060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.357360 containerd[1474]: time="2024-10-08T19:52:09.357318115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:52:09.357494 containerd[1474]: time="2024-10-08T19:52:09.357465712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:52:09.357494 containerd[1474]: time="2024-10-08T19:52:09.357485940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:52:09.357651 containerd[1474]: time="2024-10-08T19:52:09.357622385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:52:09.357765 containerd[1474]: time="2024-10-08T19:52:09.357707174Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:52:09.360476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:52:09.362988 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:52:09.366361 containerd[1474]: time="2024-10-08T19:52:09.366295290Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Oct 8 19:52:09.366453 containerd[1474]: time="2024-10-08T19:52:09.366391701Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:52:09.366491 containerd[1474]: time="2024-10-08T19:52:09.366454028Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:52:09.366491 containerd[1474]: time="2024-10-08T19:52:09.366481810Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:52:09.366648 containerd[1474]: time="2024-10-08T19:52:09.366562141Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:52:09.366846 containerd[1474]: time="2024-10-08T19:52:09.366810577Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:52:09.367264 containerd[1474]: time="2024-10-08T19:52:09.367222649Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:52:09.368169 containerd[1474]: time="2024-10-08T19:52:09.368139699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:52:09.368242 containerd[1474]: time="2024-10-08T19:52:09.368170277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:52:09.368242 containerd[1474]: time="2024-10-08T19:52:09.368227253Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:52:09.368394 containerd[1474]: time="2024-10-08T19:52:09.368262189Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1
Oct 8 19:52:09.368394 containerd[1474]: time="2024-10-08T19:52:09.368306432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368394 containerd[1474]: time="2024-10-08T19:52:09.368367316Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368394 containerd[1474]: time="2024-10-08T19:52:09.368389738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368498 containerd[1474]: time="2024-10-08T19:52:09.368408213Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368498 containerd[1474]: time="2024-10-08T19:52:09.368425455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368498 containerd[1474]: time="2024-10-08T19:52:09.368441876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368498 containerd[1474]: time="2024-10-08T19:52:09.368458988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:52:09.368498 containerd[1474]: time="2024-10-08T19:52:09.368485037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368671 containerd[1474]: time="2024-10-08T19:52:09.368503211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368671 containerd[1474]: time="2024-10-08T19:52:09.368520674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368671 containerd[1474]: time="2024-10-08T19:52:09.368573763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368671 containerd[1474]: time="2024-10-08T19:52:09.368608047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368671 containerd[1474]: time="2024-10-08T19:52:09.368629648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368670875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368692977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368710820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368732541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368749473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368767767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368784288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.368825 containerd[1474]: time="2024-10-08T19:52:09.368805147Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.368834172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.368866392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.368884005Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.368948155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.368997989Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.369018758Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.369036972Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:52:09.369136 containerd[1474]: time="2024-10-08T19:52:09.369051760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.369374 containerd[1474]: time="2024-10-08T19:52:09.369147730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:52:09.369374 containerd[1474]: time="2024-10-08T19:52:09.369178297Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:52:09.369374 containerd[1474]: time="2024-10-08T19:52:09.369193445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:52:09.369787 containerd[1474]: time="2024-10-08T19:52:09.369694705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:52:09.369787 containerd[1474]: time="2024-10-08T19:52:09.369780356Z" level=info msg="Connect containerd service"
Oct 8 19:52:09.370036 containerd[1474]: time="2024-10-08T19:52:09.369832794Z" level=info msg="using legacy CRI server"
Oct 8 19:52:09.370036 containerd[1474]: time="2024-10-08T19:52:09.369842723Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:52:09.370036 containerd[1474]: time="2024-10-08T19:52:09.369995810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:52:09.371186 containerd[1474]: time="2024-10-08T19:52:09.371147249Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:52:09.371475 containerd[1474]: time="2024-10-08T19:52:09.371361832Z" level=info msg="Start subscribing containerd event"
Oct 8 19:52:09.371788 containerd[1474]: time="2024-10-08T19:52:09.371754859Z" level=info msg="Start recovering state"
Oct 8 19:52:09.372036 containerd[1474]: time="2024-10-08T19:52:09.371769496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:52:09.372036 containerd[1474]: time="2024-10-08T19:52:09.372031868Z" level=info msg="Start event monitor"
Oct 8 19:52:09.372118 containerd[1474]: time="2024-10-08T19:52:09.372052227Z" level=info msg="Start snapshots syncer"
Oct 8 19:52:09.372118 containerd[1474]: time="2024-10-08T19:52:09.372067395Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:52:09.372118 containerd[1474]: time="2024-10-08T19:52:09.372078396Z" level=info msg="Start streaming server"
Oct 8 19:52:09.372118 containerd[1474]: time="2024-10-08T19:52:09.372100728Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:52:09.373196 containerd[1474]: time="2024-10-08T19:52:09.372224059Z" level=info msg="containerd successfully booted in 1.367924s"
Oct 8 19:52:09.372331 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:52:10.651905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:10.670735 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:52:10.671702 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:52:10.673186 systemd[1]: Startup finished in 1.378s (kernel) + 9.205s (initrd) + 9.094s (userspace) = 19.678s.
Oct 8 19:52:11.689501 kubelet[1557]: E1008 19:52:11.689353 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:52:11.694738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:52:11.695057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:52:11.695548 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
Oct 8 19:52:15.806415 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:52:15.807794 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:42044.service - OpenSSH per-connection server daemon (10.0.0.1:42044).
Oct 8 19:52:15.862354 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 42044 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:15.864759 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:15.874244 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:52:15.882809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:52:15.884747 systemd-logind[1458]: New session 1 of user core.
Oct 8 19:52:15.899824 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:52:15.902932 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:52:15.912135 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:52:16.015929 systemd[1575]: Queued start job for default target default.target.
Oct 8 19:52:16.024992 systemd[1575]: Created slice app.slice - User Application Slice.
Oct 8 19:52:16.025021 systemd[1575]: Reached target paths.target - Paths.
Oct 8 19:52:16.025035 systemd[1575]: Reached target timers.target - Timers.
Oct 8 19:52:16.026752 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:52:16.040566 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:52:16.040707 systemd[1575]: Reached target sockets.target - Sockets.
Oct 8 19:52:16.040727 systemd[1575]: Reached target basic.target - Basic System.
Oct 8 19:52:16.040768 systemd[1575]: Reached target default.target - Main User Target.
Oct 8 19:52:16.040805 systemd[1575]: Startup finished in 121ms.
Oct 8 19:52:16.041380 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:52:16.043206 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:52:16.106072 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:42054.service - OpenSSH per-connection server daemon (10.0.0.1:42054).
Oct 8 19:52:16.169302 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 42054 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:16.171682 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:16.177704 systemd-logind[1458]: New session 2 of user core.
Oct 8 19:52:16.192735 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:52:16.253630 sshd[1586]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:16.267832 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:42054.service: Deactivated successfully.
Oct 8 19:52:16.269971 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:52:16.271971 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:52:16.284904 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058).
Oct 8 19:52:16.286169 systemd-logind[1458]: Removed session 2.
Oct 8 19:52:16.320770 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:16.323160 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:16.328761 systemd-logind[1458]: New session 3 of user core.
Oct 8 19:52:16.342862 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:52:16.397048 sshd[1593]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:16.410129 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:42058.service: Deactivated successfully.
Oct 8 19:52:16.412394 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:52:16.414296 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:52:16.415894 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:42060.service - OpenSSH per-connection server daemon (10.0.0.1:42060).
Oct 8 19:52:16.416764 systemd-logind[1458]: Removed session 3.
Oct 8 19:52:16.453102 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 42060 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:16.455608 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:16.460775 systemd-logind[1458]: New session 4 of user core.
Oct 8 19:52:16.471692 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:52:16.532723 sshd[1600]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:16.552358 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:42060.service: Deactivated successfully.
Oct 8 19:52:16.554991 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:52:16.557157 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:52:16.567849 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:42076.service - OpenSSH per-connection server daemon (10.0.0.1:42076).
Oct 8 19:52:16.569020 systemd-logind[1458]: Removed session 4.
Oct 8 19:52:16.596792 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42076 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:16.598831 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:16.603816 systemd-logind[1458]: New session 5 of user core.
Oct 8 19:52:16.612665 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:52:16.678877 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:52:16.679285 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:52:16.705520 sudo[1610]: pam_unix(sudo:session): session closed for user root
Oct 8 19:52:16.708558 sshd[1607]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:16.725903 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:42076.service: Deactivated successfully.
Oct 8 19:52:16.728059 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:52:16.729925 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:52:16.742953 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:42088.service - OpenSSH per-connection server daemon (10.0.0.1:42088).
Oct 8 19:52:16.743966 systemd-logind[1458]: Removed session 5.
Oct 8 19:52:16.773359 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:16.775641 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:16.780976 systemd-logind[1458]: New session 6 of user core.
Oct 8 19:52:16.790703 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:52:16.848486 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:52:16.848929 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:52:16.853733 sudo[1619]: pam_unix(sudo:session): session closed for user root
Oct 8 19:52:16.861074 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:52:16.861463 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:52:16.881064 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:52:16.883611 auditctl[1622]: No rules
Oct 8 19:52:16.884215 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:52:16.884570 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:52:16.888397 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:52:16.928442 augenrules[1640]: No rules
Oct 8 19:52:16.930981 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:52:16.932642 sudo[1618]: pam_unix(sudo:session): session closed for user root
Oct 8 19:52:16.935518 sshd[1615]: pam_unix(sshd:session): session closed for user core
Oct 8 19:52:16.944161 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:42088.service: Deactivated successfully.
Oct 8 19:52:16.946431 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:52:16.948274 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:52:16.950054 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:42096.service - OpenSSH per-connection server daemon (10.0.0.1:42096).
Oct 8 19:52:16.950939 systemd-logind[1458]: Removed session 6.
Oct 8 19:52:17.006633 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 42096 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:52:17.008582 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:52:17.013763 systemd-logind[1458]: New session 7 of user core.
Oct 8 19:52:17.025745 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:52:17.083978 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:52:17.084362 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:52:17.733764 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:52:17.733888 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:52:18.414891 dockerd[1669]: time="2024-10-08T19:52:18.414812891Z" level=info msg="Starting up"
Oct 8 19:52:19.194717 dockerd[1669]: time="2024-10-08T19:52:19.194654542Z" level=info msg="Loading containers: start."
Oct 8 19:52:19.693575 kernel: Initializing XFRM netlink socket
Oct 8 19:52:19.820366 systemd-networkd[1398]: docker0: Link UP
Oct 8 19:52:19.887569 dockerd[1669]: time="2024-10-08T19:52:19.887458817Z" level=info msg="Loading containers: done."
Oct 8 19:52:19.921390 dockerd[1669]: time="2024-10-08T19:52:19.920601632Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:52:19.921390 dockerd[1669]: time="2024-10-08T19:52:19.920799814Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 19:52:19.921390 dockerd[1669]: time="2024-10-08T19:52:19.921008836Z" level=info msg="Daemon has completed initialization"
Oct 8 19:52:21.070901 dockerd[1669]: time="2024-10-08T19:52:21.070784876Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:52:21.072302 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:52:21.790610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:52:21.813830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:22.268028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:22.272855 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:52:22.363262 kubelet[1822]: E1008 19:52:22.363191 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:52:22.370831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:52:22.371062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:52:22.977402 containerd[1474]: time="2024-10-08T19:52:22.977337102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 8 19:52:31.147192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222898337.mount: Deactivated successfully.
Oct 8 19:52:32.540612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:52:32.550775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:32.712704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:32.717841 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:52:32.823064 kubelet[1854]: E1008 19:52:32.822795 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:52:32.827630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:52:32.827900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:52:35.808789 containerd[1474]: time="2024-10-08T19:52:35.808708233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:35.809725 containerd[1474]: time="2024-10-08T19:52:35.809643347Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097"
Oct 8 19:52:35.811203 containerd[1474]: time="2024-10-08T19:52:35.811155052Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:35.814555 containerd[1474]: time="2024-10-08T19:52:35.814496087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:35.815691 containerd[1474]: time="2024-10-08T19:52:35.815647557Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 12.838256573s"
Oct 8 19:52:35.815759 containerd[1474]: time="2024-10-08T19:52:35.815701658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\""
Oct 8 19:52:35.843961 containerd[1474]: time="2024-10-08T19:52:35.843872669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 8 19:52:38.774463 containerd[1474]: time="2024-10-08T19:52:38.774349931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:38.777000 containerd[1474]: time="2024-10-08T19:52:38.776925997Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652"
Oct 8 19:52:38.778958 containerd[1474]: time="2024-10-08T19:52:38.778923438Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:38.783386 containerd[1474]: time="2024-10-08T19:52:38.783298654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:38.784337 containerd[1474]: time="2024-10-08T19:52:38.784292289Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 2.940351221s"
Oct 8 19:52:38.784387 containerd[1474]: time="2024-10-08T19:52:38.784337417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\""
Oct 8 19:52:38.815241 containerd[1474]: time="2024-10-08T19:52:38.815183857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 8 19:52:40.028696 containerd[1474]: time="2024-10-08T19:52:40.028614433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:40.051688 containerd[1474]: time="2024-10-08T19:52:40.051589673Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987"
Oct 8 19:52:40.064506 containerd[1474]: time="2024-10-08T19:52:40.064429885Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:40.078114 containerd[1474]: time="2024-10-08T19:52:40.078070366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:40.079330 containerd[1474]: time="2024-10-08T19:52:40.079282035Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.264038013s"
Oct 8 19:52:40.079330 containerd[1474]: time="2024-10-08T19:52:40.079325298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\""
Oct 8 19:52:40.103172 containerd[1474]: time="2024-10-08T19:52:40.103125162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 8 19:52:42.648952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241794074.mount: Deactivated successfully.
Oct 8 19:52:43.040733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 19:52:43.046977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:43.234719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:43.251893 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:52:43.643168 kubelet[1954]: E1008 19:52:43.643033 1954 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:52:43.647751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:52:43.647961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:52:43.812812 containerd[1474]: time="2024-10-08T19:52:43.812713068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:43.814925 containerd[1474]: time="2024-10-08T19:52:43.814827433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362"
Oct 8 19:52:43.816798 containerd[1474]: time="2024-10-08T19:52:43.816753319Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:43.820425 containerd[1474]: time="2024-10-08T19:52:43.820369318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:43.821298 containerd[1474]: time="2024-10-08T19:52:43.821224665Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 3.718046532s"
Oct 8 19:52:43.821298 containerd[1474]: time="2024-10-08T19:52:43.821291542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\""
Oct 8 19:52:43.852091 containerd[1474]: time="2024-10-08T19:52:43.852035188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:52:44.826480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621556387.mount: Deactivated successfully.
Oct 8 19:52:49.604963 containerd[1474]: time="2024-10-08T19:52:49.604848420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.609514 containerd[1474]: time="2024-10-08T19:52:49.609345982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 8 19:52:49.613940 containerd[1474]: time="2024-10-08T19:52:49.613773689Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.623125 containerd[1474]: time="2024-10-08T19:52:49.622997375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:49.624346 containerd[1474]: time="2024-10-08T19:52:49.624261900Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 5.772168039s"
Oct 8 19:52:49.624346 containerd[1474]: time="2024-10-08T19:52:49.624337303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 8 19:52:49.654665 containerd[1474]: time="2024-10-08T19:52:49.654582095Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:52:51.004362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872792931.mount: Deactivated successfully.
Oct 8 19:52:51.013433 containerd[1474]: time="2024-10-08T19:52:51.013358757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.014590 containerd[1474]: time="2024-10-08T19:52:51.014510022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 8 19:52:51.015957 containerd[1474]: time="2024-10-08T19:52:51.015918296Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.018745 containerd[1474]: time="2024-10-08T19:52:51.018650060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:52:51.019346 containerd[1474]: time="2024-10-08T19:52:51.019295635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.364665469s"
Oct 8 19:52:51.019346 containerd[1474]: time="2024-10-08T19:52:51.019333187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 8 19:52:51.026757 update_engine[1459]: I20241008 19:52:51.026654 1459 update_attempter.cc:509] Updating boot flags...
Oct 8 19:52:51.053367 containerd[1474]: time="2024-10-08T19:52:51.053331388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 19:52:51.101886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2036) Oct 8 19:52:51.148579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2037) Oct 8 19:52:51.220607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2037) Oct 8 19:52:53.790517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:52:53.799783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:53.956511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:53.963319 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:52:54.025326 kubelet[2052]: E1008 19:52:54.025239 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:52:54.030376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:52:54.030658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:54.788100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966944863.mount: Deactivated successfully. 
Oct 8 19:52:58.657176 containerd[1474]: time="2024-10-08T19:52:58.657063423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:58.657935 containerd[1474]: time="2024-10-08T19:52:58.657865269Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 8 19:52:58.659370 containerd[1474]: time="2024-10-08T19:52:58.659322954Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:58.664646 containerd[1474]: time="2024-10-08T19:52:58.664574668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:58.666327 containerd[1474]: time="2024-10-08T19:52:58.666282065Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 7.612754223s" Oct 8 19:52:58.666327 containerd[1474]: time="2024-10-08T19:52:58.666321990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 19:53:00.948311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:53:00.959953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:53:00.989082 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-7.scope)... Oct 8 19:53:00.989104 systemd[1]: Reloading... 
Oct 8 19:53:01.074557 zram_generator::config[2233]: No configuration found. Oct 8 19:53:01.408194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:53:01.504134 systemd[1]: Reloading finished in 514 ms. Oct 8 19:53:01.564376 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:53:01.564492 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:53:01.564864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:53:01.567016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:53:01.734839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:53:01.740979 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:53:01.786741 kubelet[2275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:53:01.786741 kubelet[2275]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:53:01.786741 kubelet[2275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:53:01.787205 kubelet[2275]: I1008 19:53:01.786797 2275 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:53:02.348736 kubelet[2275]: I1008 19:53:02.348670 2275 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:53:02.348736 kubelet[2275]: I1008 19:53:02.348718 2275 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:53:02.348985 kubelet[2275]: I1008 19:53:02.348968 2275 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:53:02.368801 kubelet[2275]: I1008 19:53:02.368742 2275 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:53:02.369552 kubelet[2275]: E1008 19:53:02.369415 2275 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.384555 kubelet[2275]: I1008 19:53:02.384457 2275 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:53:02.385869 kubelet[2275]: I1008 19:53:02.385806 2275 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:53:02.386129 kubelet[2275]: I1008 19:53:02.385850 2275 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:53:02.386867 kubelet[2275]: I1008 19:53:02.386812 2275 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:53:02.386867 
kubelet[2275]: I1008 19:53:02.386841 2275 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:53:02.387047 kubelet[2275]: I1008 19:53:02.387011 2275 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:53:02.387970 kubelet[2275]: I1008 19:53:02.387924 2275 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:53:02.387970 kubelet[2275]: I1008 19:53:02.387969 2275 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:53:02.388059 kubelet[2275]: I1008 19:53:02.388006 2275 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:53:02.388059 kubelet[2275]: I1008 19:53:02.388037 2275 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:53:02.390975 kubelet[2275]: W1008 19:53:02.390838 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.390975 kubelet[2275]: E1008 19:53:02.390929 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.392564 kubelet[2275]: W1008 19:53:02.392338 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.392564 kubelet[2275]: E1008 19:53:02.392396 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.393898 
kubelet[2275]: I1008 19:53:02.393874 2275 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:53:02.395461 kubelet[2275]: I1008 19:53:02.395427 2275 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:53:02.395564 kubelet[2275]: W1008 19:53:02.395541 2275 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:53:02.397115 kubelet[2275]: I1008 19:53:02.396684 2275 server.go:1264] "Started kubelet" Oct 8 19:53:02.398111 kubelet[2275]: I1008 19:53:02.398070 2275 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:53:02.400218 kubelet[2275]: I1008 19:53:02.400162 2275 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:53:02.402556 kubelet[2275]: I1008 19:53:02.401621 2275 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:53:02.402556 kubelet[2275]: E1008 19:53:02.401761 2275 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:53:02.402556 kubelet[2275]: I1008 19:53:02.402158 2275 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:53:02.402556 kubelet[2275]: I1008 19:53:02.402297 2275 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:53:02.402556 kubelet[2275]: I1008 19:53:02.402368 2275 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:53:02.402886 kubelet[2275]: I1008 19:53:02.402823 2275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:53:02.403236 kubelet[2275]: I1008 19:53:02.403215 2275 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:53:02.403355 kubelet[2275]: W1008 19:53:02.402858 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.403468 kubelet[2275]: E1008 19:53:02.403452 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.403891 kubelet[2275]: E1008 19:53:02.403842 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Oct 8 19:53:02.403951 kubelet[2275]: E1008 19:53:02.403773 2275 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: 
connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc9245146f1dda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:53:02.396653018 +0000 UTC m=+0.651168371,LastTimestamp:2024-10-08 19:53:02.396653018 +0000 UTC m=+0.651168371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:53:02.404711 kubelet[2275]: I1008 19:53:02.404672 2275 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:53:02.404837 kubelet[2275]: I1008 19:53:02.404792 2275 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:53:02.407347 kubelet[2275]: I1008 19:53:02.407325 2275 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:53:02.420389 kubelet[2275]: I1008 19:53:02.420319 2275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:53:02.422253 kubelet[2275]: I1008 19:53:02.421921 2275 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:53:02.422253 kubelet[2275]: I1008 19:53:02.421981 2275 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:53:02.422253 kubelet[2275]: I1008 19:53:02.422016 2275 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:53:02.422253 kubelet[2275]: E1008 19:53:02.422085 2275 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:53:02.425661 kubelet[2275]: W1008 19:53:02.425629 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.425744 kubelet[2275]: E1008 19:53:02.425669 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:02.429500 kubelet[2275]: I1008 19:53:02.429455 2275 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:53:02.429500 kubelet[2275]: I1008 19:53:02.429476 2275 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:53:02.429643 kubelet[2275]: I1008 19:53:02.429522 2275 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:53:02.504846 kubelet[2275]: I1008 19:53:02.504800 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:02.505182 kubelet[2275]: E1008 19:53:02.505156 2275 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Oct 8 19:53:02.513872 kubelet[2275]: E1008 19:53:02.513776 2275 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc9245146f1dda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:53:02.396653018 +0000 UTC m=+0.651168371,LastTimestamp:2024-10-08 19:53:02.396653018 +0000 UTC m=+0.651168371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:53:02.522958 kubelet[2275]: E1008 19:53:02.522908 2275 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:53:02.604949 kubelet[2275]: E1008 19:53:02.604780 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Oct 8 19:53:02.707781 kubelet[2275]: I1008 19:53:02.707715 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:02.708244 kubelet[2275]: E1008 19:53:02.708184 2275 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Oct 8 19:53:02.723372 kubelet[2275]: E1008 19:53:02.723304 2275 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:53:03.006110 kubelet[2275]: E1008 19:53:03.005938 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Oct 8 19:53:03.109939 kubelet[2275]: I1008 19:53:03.109892 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:03.110352 kubelet[2275]: E1008 19:53:03.110311 2275 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Oct 8 19:53:03.124447 kubelet[2275]: E1008 19:53:03.124410 2275 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:53:03.274827 kubelet[2275]: W1008 19:53:03.274637 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.274827 kubelet[2275]: E1008 19:53:03.274711 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.405637 kubelet[2275]: W1008 19:53:03.405547 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.405637 kubelet[2275]: E1008 19:53:03.405639 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.638995 kubelet[2275]: W1008 19:53:03.638917 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.638995 kubelet[2275]: E1008 19:53:03.638980 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.794079 kubelet[2275]: W1008 19:53:03.794001 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.794079 kubelet[2275]: E1008 19:53:03.794071 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:03.806550 kubelet[2275]: E1008 19:53:03.806488 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Oct 8 19:53:03.912008 kubelet[2275]: I1008 19:53:03.911871 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:03.912319 kubelet[2275]: E1008 19:53:03.912281 2275 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Oct 8 
19:53:03.925390 kubelet[2275]: E1008 19:53:03.925347 2275 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:53:04.153448 kubelet[2275]: I1008 19:53:04.153393 2275 policy_none.go:49] "None policy: Start" Oct 8 19:53:04.154363 kubelet[2275]: I1008 19:53:04.154341 2275 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:53:04.154363 kubelet[2275]: I1008 19:53:04.154364 2275 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:53:04.406642 kubelet[2275]: E1008 19:53:04.406601 2275 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:05.020094 kubelet[2275]: W1008 19:53:05.020045 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:05.020094 kubelet[2275]: E1008 19:53:05.020090 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:05.266740 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:53:05.280391 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:53:05.284978 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:53:05.291944 kubelet[2275]: W1008 19:53:05.291806 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:05.291944 kubelet[2275]: E1008 19:53:05.291880 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:05.293198 kubelet[2275]: I1008 19:53:05.293140 2275 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:53:05.293650 kubelet[2275]: I1008 19:53:05.293465 2275 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:53:05.293726 kubelet[2275]: I1008 19:53:05.293692 2275 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:53:05.295397 kubelet[2275]: E1008 19:53:05.295355 2275 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:53:05.407971 kubelet[2275]: E1008 19:53:05.407878 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" Oct 8 19:53:05.514463 kubelet[2275]: I1008 19:53:05.514398 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:05.514824 kubelet[2275]: E1008 19:53:05.514787 2275 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: 
connection refused" node="localhost" Oct 8 19:53:05.526227 kubelet[2275]: I1008 19:53:05.526133 2275 topology_manager.go:215] "Topology Admit Handler" podUID="3dce5d11dbf8c19899da025f281c6dc4" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:53:05.527834 kubelet[2275]: I1008 19:53:05.527794 2275 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:53:05.528809 kubelet[2275]: I1008 19:53:05.528776 2275 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:53:05.535590 systemd[1]: Created slice kubepods-burstable-pod3dce5d11dbf8c19899da025f281c6dc4.slice - libcontainer container kubepods-burstable-pod3dce5d11dbf8c19899da025f281c6dc4.slice. Oct 8 19:53:05.550033 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 8 19:53:05.554389 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. 
Oct 8 19:53:05.620650 kubelet[2275]: I1008 19:53:05.620574 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:05.620650 kubelet[2275]: I1008 19:53:05.620637 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:05.620650 kubelet[2275]: I1008 19:53:05.620669 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:05.620921 kubelet[2275]: I1008 19:53:05.620693 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:05.620921 kubelet[2275]: I1008 19:53:05.620748 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:05.620921 kubelet[2275]: I1008 19:53:05.620783 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:05.620921 kubelet[2275]: I1008 19:53:05.620813 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:53:05.620921 kubelet[2275]: I1008 19:53:05.620832 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:05.621069 kubelet[2275]: I1008 19:53:05.620913 2275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:05.847925 kubelet[2275]: E1008 19:53:05.847872 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:05.848772 containerd[1474]: time="2024-10-08T19:53:05.848720803Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3dce5d11dbf8c19899da025f281c6dc4,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:05.852978 kubelet[2275]: E1008 19:53:05.852950 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:05.853467 containerd[1474]: time="2024-10-08T19:53:05.853421963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:05.856696 kubelet[2275]: E1008 19:53:05.856664 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:05.857054 containerd[1474]: time="2024-10-08T19:53:05.857024854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:06.393980 kubelet[2275]: W1008 19:53:06.393898 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:06.393980 kubelet[2275]: E1008 19:53:06.393967 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:06.428155 kubelet[2275]: W1008 19:53:06.428082 2275 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection 
refused Oct 8 19:53:06.428155 kubelet[2275]: E1008 19:53:06.428147 2275 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Oct 8 19:53:06.803626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144445830.mount: Deactivated successfully. Oct 8 19:53:06.897755 containerd[1474]: time="2024-10-08T19:53:06.897602233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:53:06.902688 containerd[1474]: time="2024-10-08T19:53:06.902623864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:53:06.906019 containerd[1474]: time="2024-10-08T19:53:06.905984025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:53:06.907925 containerd[1474]: time="2024-10-08T19:53:06.907863525Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:53:06.912194 containerd[1474]: time="2024-10-08T19:53:06.912106580Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:53:06.913342 containerd[1474]: time="2024-10-08T19:53:06.913254643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:53:06.915687 containerd[1474]: time="2024-10-08T19:53:06.915606343Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 19:53:06.918503 containerd[1474]: time="2024-10-08T19:53:06.918445031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:53:06.920797 containerd[1474]: time="2024-10-08T19:53:06.920744363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.071927759s" Oct 8 19:53:06.921686 containerd[1474]: time="2024-10-08T19:53:06.921640100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.068137715s" Oct 8 19:53:06.925283 containerd[1474]: time="2024-10-08T19:53:06.925234251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.06814648s" Oct 8 19:53:07.150839 containerd[1474]: time="2024-10-08T19:53:07.150612942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:07.151564 containerd[1474]: time="2024-10-08T19:53:07.151486488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:07.152190 containerd[1474]: time="2024-10-08T19:53:07.151592347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.152190 containerd[1474]: time="2024-10-08T19:53:07.152121103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.153500 containerd[1474]: time="2024-10-08T19:53:07.152207326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:07.153500 containerd[1474]: time="2024-10-08T19:53:07.152304788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:07.153500 containerd[1474]: time="2024-10-08T19:53:07.152335096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.153627 containerd[1474]: time="2024-10-08T19:53:07.153469442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.178760 containerd[1474]: time="2024-10-08T19:53:07.178507270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:07.178760 containerd[1474]: time="2024-10-08T19:53:07.178589704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:07.178760 containerd[1474]: time="2024-10-08T19:53:07.178600565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.178760 containerd[1474]: time="2024-10-08T19:53:07.178703479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:07.207707 systemd[1]: Started cri-containerd-2d149d2ada6d77d1620110da2b32ea069212c9643ba43cf62ed03352453eb09b.scope - libcontainer container 2d149d2ada6d77d1620110da2b32ea069212c9643ba43cf62ed03352453eb09b. Oct 8 19:53:07.209636 systemd[1]: Started cri-containerd-af17149093188b527bc138bf69a13b719c6c1a04a28b9b2cb8d7f6132038e239.scope - libcontainer container af17149093188b527bc138bf69a13b719c6c1a04a28b9b2cb8d7f6132038e239. Oct 8 19:53:07.214514 systemd[1]: Started cri-containerd-4f9e7153a236b7eb8efd3041137967e4a3cc9b0c0a455c26ea25bf3878c27c2b.scope - libcontainer container 4f9e7153a236b7eb8efd3041137967e4a3cc9b0c0a455c26ea25bf3878c27c2b. 
Oct 8 19:53:07.264994 containerd[1474]: time="2024-10-08T19:53:07.264920675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d149d2ada6d77d1620110da2b32ea069212c9643ba43cf62ed03352453eb09b\"" Oct 8 19:53:07.266198 kubelet[2275]: E1008 19:53:07.266169 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:07.269647 containerd[1474]: time="2024-10-08T19:53:07.269581122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f9e7153a236b7eb8efd3041137967e4a3cc9b0c0a455c26ea25bf3878c27c2b\"" Oct 8 19:53:07.270688 containerd[1474]: time="2024-10-08T19:53:07.270647321Z" level=info msg="CreateContainer within sandbox \"2d149d2ada6d77d1620110da2b32ea069212c9643ba43cf62ed03352453eb09b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:53:07.270766 kubelet[2275]: E1008 19:53:07.270706 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:07.273613 containerd[1474]: time="2024-10-08T19:53:07.273364679Z" level=info msg="CreateContainer within sandbox \"4f9e7153a236b7eb8efd3041137967e4a3cc9b0c0a455c26ea25bf3878c27c2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:53:07.317736 containerd[1474]: time="2024-10-08T19:53:07.317683150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3dce5d11dbf8c19899da025f281c6dc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"af17149093188b527bc138bf69a13b719c6c1a04a28b9b2cb8d7f6132038e239\"" Oct 8 19:53:07.318650 
kubelet[2275]: E1008 19:53:07.318599 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:07.321212 containerd[1474]: time="2024-10-08T19:53:07.321176640Z" level=info msg="CreateContainer within sandbox \"af17149093188b527bc138bf69a13b719c6c1a04a28b9b2cb8d7f6132038e239\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:53:07.349764 containerd[1474]: time="2024-10-08T19:53:07.349677890Z" level=info msg="CreateContainer within sandbox \"2d149d2ada6d77d1620110da2b32ea069212c9643ba43cf62ed03352453eb09b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a0c8855dbd2f076f509b77dcbba5475353932598b9de1bf086e8eb11a8c7e7b\"" Oct 8 19:53:07.350634 containerd[1474]: time="2024-10-08T19:53:07.350589107Z" level=info msg="StartContainer for \"0a0c8855dbd2f076f509b77dcbba5475353932598b9de1bf086e8eb11a8c7e7b\"" Oct 8 19:53:07.355331 containerd[1474]: time="2024-10-08T19:53:07.355285683Z" level=info msg="CreateContainer within sandbox \"af17149093188b527bc138bf69a13b719c6c1a04a28b9b2cb8d7f6132038e239\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bdd82e1e67c0f9d2f069cf2248e7d90948657b5654e2fb291ec7d8e874866ec4\"" Oct 8 19:53:07.355834 containerd[1474]: time="2024-10-08T19:53:07.355807896Z" level=info msg="StartContainer for \"bdd82e1e67c0f9d2f069cf2248e7d90948657b5654e2fb291ec7d8e874866ec4\"" Oct 8 19:53:07.357193 containerd[1474]: time="2024-10-08T19:53:07.357129546Z" level=info msg="CreateContainer within sandbox \"4f9e7153a236b7eb8efd3041137967e4a3cc9b0c0a455c26ea25bf3878c27c2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd687bd87312fc544b8dea95296691d81d263eb50274321951b852b04454b628\"" Oct 8 19:53:07.357816 containerd[1474]: time="2024-10-08T19:53:07.357674602Z" level=info msg="StartContainer for 
\"bd687bd87312fc544b8dea95296691d81d263eb50274321951b852b04454b628\"" Oct 8 19:53:07.385743 systemd[1]: Started cri-containerd-0a0c8855dbd2f076f509b77dcbba5475353932598b9de1bf086e8eb11a8c7e7b.scope - libcontainer container 0a0c8855dbd2f076f509b77dcbba5475353932598b9de1bf086e8eb11a8c7e7b. Oct 8 19:53:07.395683 systemd[1]: Started cri-containerd-bd687bd87312fc544b8dea95296691d81d263eb50274321951b852b04454b628.scope - libcontainer container bd687bd87312fc544b8dea95296691d81d263eb50274321951b852b04454b628. Oct 8 19:53:07.396984 systemd[1]: Started cri-containerd-bdd82e1e67c0f9d2f069cf2248e7d90948657b5654e2fb291ec7d8e874866ec4.scope - libcontainer container bdd82e1e67c0f9d2f069cf2248e7d90948657b5654e2fb291ec7d8e874866ec4. Oct 8 19:53:07.564646 containerd[1474]: time="2024-10-08T19:53:07.564514003Z" level=info msg="StartContainer for \"bd687bd87312fc544b8dea95296691d81d263eb50274321951b852b04454b628\" returns successfully" Oct 8 19:53:07.564830 containerd[1474]: time="2024-10-08T19:53:07.564794993Z" level=info msg="StartContainer for \"0a0c8855dbd2f076f509b77dcbba5475353932598b9de1bf086e8eb11a8c7e7b\" returns successfully" Oct 8 19:53:07.564997 containerd[1474]: time="2024-10-08T19:53:07.564832694Z" level=info msg="StartContainer for \"bdd82e1e67c0f9d2f069cf2248e7d90948657b5654e2fb291ec7d8e874866ec4\" returns successfully" Oct 8 19:53:08.459848 kubelet[2275]: E1008 19:53:08.459797 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.460681 kubelet[2275]: E1008 19:53:08.460662 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.462902 kubelet[2275]: E1008 19:53:08.462871 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.716573 kubelet[2275]: I1008 19:53:08.716343 2275 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:08.954858 kubelet[2275]: E1008 19:53:08.954799 2275 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:53:09.058335 kubelet[2275]: I1008 19:53:09.058258 2275 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:53:09.070872 kubelet[2275]: E1008 19:53:09.070798 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.171711 kubelet[2275]: E1008 19:53:09.171636 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.271876 kubelet[2275]: E1008 19:53:09.271806 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.373115 kubelet[2275]: E1008 19:53:09.372954 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.465165 kubelet[2275]: E1008 19:53:09.465129 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:09.465664 kubelet[2275]: E1008 19:53:09.465313 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:09.466102 kubelet[2275]: E1008 19:53:09.466074 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:09.474014 kubelet[2275]: E1008 19:53:09.473961 2275 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.574782 kubelet[2275]: E1008 19:53:09.574703 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.675588 kubelet[2275]: E1008 19:53:09.675378 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.776186 kubelet[2275]: E1008 19:53:09.776105 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.876736 kubelet[2275]: E1008 19:53:09.876676 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:09.977741 kubelet[2275]: E1008 19:53:09.977578 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:10.078416 kubelet[2275]: E1008 19:53:10.078320 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:10.179160 kubelet[2275]: E1008 19:53:10.179091 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:10.279947 kubelet[2275]: E1008 19:53:10.279762 2275 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:10.394336 kubelet[2275]: I1008 19:53:10.394286 2275 apiserver.go:52] "Watching apiserver" Oct 8 19:53:10.403100 kubelet[2275]: I1008 19:53:10.403059 2275 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:53:10.692881 kubelet[2275]: E1008 19:53:10.692821 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 
19:53:11.239648 kubelet[2275]: E1008 19:53:11.239592 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:11.467406 kubelet[2275]: E1008 19:53:11.467369 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:11.467406 kubelet[2275]: E1008 19:53:11.467395 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:12.218865 systemd[1]: Reloading requested from client PID 2552 ('systemctl') (unit session-7.scope)... Oct 8 19:53:12.218891 systemd[1]: Reloading... Oct 8 19:53:12.309573 zram_generator::config[2594]: No configuration found. Oct 8 19:53:12.410781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:53:12.443699 kubelet[2275]: I1008 19:53:12.443638 2275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.443616067 podStartE2EDuration="2.443616067s" podCreationTimestamp="2024-10-08 19:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:12.443025125 +0000 UTC m=+10.697540478" watchObservedRunningTime="2024-10-08 19:53:12.443616067 +0000 UTC m=+10.698131410" Oct 8 19:53:12.514839 systemd[1]: Reloading finished in 295 ms. Oct 8 19:53:12.574686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:53:12.598228 systemd[1]: kubelet.service: Deactivated successfully. 
Oct 8 19:53:12.598573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:53:12.598634 systemd[1]: kubelet.service: Consumed 1.253s CPU time, 118.2M memory peak, 0B memory swap peak. Oct 8 19:53:12.607750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:53:12.773769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:53:12.779963 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:53:12.831302 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:53:12.831302 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:53:12.831302 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:53:12.831774 kubelet[2636]: I1008 19:53:12.831332 2636 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:53:12.836368 kubelet[2636]: I1008 19:53:12.836326 2636 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:53:12.836368 kubelet[2636]: I1008 19:53:12.836353 2636 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:53:12.836622 kubelet[2636]: I1008 19:53:12.836603 2636 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:53:12.837909 kubelet[2636]: I1008 19:53:12.837878 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:53:12.840811 kubelet[2636]: I1008 19:53:12.840779 2636 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:53:12.848922 kubelet[2636]: I1008 19:53:12.848872 2636 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:53:12.849178 kubelet[2636]: I1008 19:53:12.849137 2636 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:53:12.849368 kubelet[2636]: I1008 19:53:12.849177 2636 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:53:12.849509 kubelet[2636]: I1008 19:53:12.849391 2636 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:53:12.849509 
kubelet[2636]: I1008 19:53:12.849412 2636 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:53:12.849509 kubelet[2636]: I1008 19:53:12.849469 2636 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:53:12.849649 kubelet[2636]: I1008 19:53:12.849581 2636 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:53:12.849649 kubelet[2636]: I1008 19:53:12.849596 2636 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:53:12.849649 kubelet[2636]: I1008 19:53:12.849616 2636 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:53:12.849649 kubelet[2636]: I1008 19:53:12.849634 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:53:12.850392 kubelet[2636]: I1008 19:53:12.850363 2636 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:53:12.850783 kubelet[2636]: I1008 19:53:12.850615 2636 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:53:12.851058 kubelet[2636]: I1008 19:53:12.851035 2636 server.go:1264] "Started kubelet" Oct 8 19:53:12.851686 kubelet[2636]: I1008 19:53:12.851579 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:53:12.851970 kubelet[2636]: I1008 19:53:12.851936 2636 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:53:12.852129 kubelet[2636]: I1008 19:53:12.851985 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:53:12.852850 kubelet[2636]: I1008 19:53:12.852820 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:53:12.853014 kubelet[2636]: I1008 19:53:12.852967 2636 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:53:12.862196 kubelet[2636]: E1008 19:53:12.862124 2636 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:53:12.862196 kubelet[2636]: I1008 19:53:12.862211 2636 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:53:12.863091 kubelet[2636]: I1008 19:53:12.862421 2636 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:53:12.863091 kubelet[2636]: I1008 19:53:12.862714 2636 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:53:12.863566 kubelet[2636]: E1008 19:53:12.863513 2636 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:53:12.866371 kubelet[2636]: I1008 19:53:12.865803 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:53:12.870129 kubelet[2636]: I1008 19:53:12.870100 2636 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:53:12.870129 kubelet[2636]: I1008 19:53:12.870124 2636 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:53:12.872441 kubelet[2636]: I1008 19:53:12.872410 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:53:12.873974 kubelet[2636]: I1008 19:53:12.873949 2636 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:53:12.874016 kubelet[2636]: I1008 19:53:12.873983 2636 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:53:12.874016 kubelet[2636]: I1008 19:53:12.874004 2636 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:53:12.874075 kubelet[2636]: E1008 19:53:12.874048 2636 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:53:12.912852 kubelet[2636]: I1008 19:53:12.912816 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:53:12.912852 kubelet[2636]: I1008 19:53:12.912839 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:53:12.912852 kubelet[2636]: I1008 19:53:12.912863 2636 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:53:12.913083 kubelet[2636]: I1008 19:53:12.913030 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:53:12.913083 kubelet[2636]: I1008 19:53:12.913040 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:53:12.913083 kubelet[2636]: I1008 19:53:12.913058 2636 policy_none.go:49] "None policy: Start" Oct 8 19:53:12.914152 kubelet[2636]: I1008 19:53:12.914119 2636 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:53:12.914152 kubelet[2636]: I1008 19:53:12.914146 2636 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:53:12.914281 kubelet[2636]: I1008 19:53:12.914259 2636 state_mem.go:75] "Updated machine memory state" Oct 8 19:53:12.920129 kubelet[2636]: I1008 19:53:12.920097 2636 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:53:12.920700 kubelet[2636]: I1008 19:53:12.920414 2636 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:53:12.920700 kubelet[2636]: I1008 19:53:12.920544 2636 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:53:12.968396 kubelet[2636]: I1008 19:53:12.968359 2636 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:53:12.974413 kubelet[2636]: I1008 19:53:12.974270 2636 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:53:12.974413 kubelet[2636]: I1008 19:53:12.974404 2636 topology_manager.go:215] "Topology Admit Handler" podUID="3dce5d11dbf8c19899da025f281c6dc4" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:53:12.974612 kubelet[2636]: I1008 19:53:12.974458 2636 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:53:12.977244 kubelet[2636]: I1008 19:53:12.977175 2636 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:53:12.977343 kubelet[2636]: I1008 19:53:12.977285 2636 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:53:12.980442 kubelet[2636]: E1008 19:53:12.980409 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:12.981119 kubelet[2636]: E1008 19:53:12.981095 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:13.163496 kubelet[2636]: I1008 19:53:13.163416 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 
19:53:13.163496 kubelet[2636]: I1008 19:53:13.163474 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:13.163496 kubelet[2636]: I1008 19:53:13.163504 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:13.163775 kubelet[2636]: I1008 19:53:13.163546 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:53:13.163775 kubelet[2636]: I1008 19:53:13.163568 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:13.163775 kubelet[2636]: I1008 19:53:13.163587 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:13.163775 kubelet[2636]: I1008 19:53:13.163605 
2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dce5d11dbf8c19899da025f281c6dc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3dce5d11dbf8c19899da025f281c6dc4\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:53:13.163775 kubelet[2636]: I1008 19:53:13.163623 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:13.163914 kubelet[2636]: I1008 19:53:13.163641 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:53:13.281905 kubelet[2636]: E1008 19:53:13.281790 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:13.281905 kubelet[2636]: E1008 19:53:13.281884 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:13.282246 kubelet[2636]: E1008 19:53:13.282027 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:13.850787 kubelet[2636]: I1008 19:53:13.850728 2636 apiserver.go:52] "Watching apiserver" Oct 8 
19:53:13.863610 kubelet[2636]: I1008 19:53:13.863430 2636 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:53:13.893330 kubelet[2636]: E1008 19:53:13.893131 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:13.893569 kubelet[2636]: E1008 19:53:13.893445 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:14.168896 kubelet[2636]: E1008 19:53:14.168725 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 8 19:53:14.170061 kubelet[2636]: E1008 19:53:14.169154 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:14.423044 kubelet[2636]: I1008 19:53:14.421580 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.421559634 podStartE2EDuration="2.421559634s" podCreationTimestamp="2024-10-08 19:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:14.421309243 +0000 UTC m=+1.634934173" watchObservedRunningTime="2024-10-08 19:53:14.421559634 +0000 UTC m=+1.635184564" Oct 8 19:53:14.900388 kubelet[2636]: E1008 19:53:14.900186 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:14.900948 kubelet[2636]: E1008 19:53:14.900929 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:15.904018 kubelet[2636]: E1008 19:53:15.903821 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:16.604152 kubelet[2636]: E1008 19:53:16.604092 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:18.218248 kubelet[2636]: E1008 19:53:18.218202 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:18.910345 kubelet[2636]: E1008 19:53:18.910296 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:18.966245 sudo[1651]: pam_unix(sudo:session): session closed for user root Oct 8 19:53:18.969271 sshd[1648]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:18.976071 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:42096.service: Deactivated successfully. Oct 8 19:53:18.979079 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:53:18.979356 systemd[1]: session-7.scope: Consumed 5.153s CPU time, 195.1M memory peak, 0B memory swap peak. Oct 8 19:53:18.979935 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:53:18.981392 systemd-logind[1458]: Removed session 7. 
Oct 8 19:53:19.911610 kubelet[2636]: E1008 19:53:19.911523 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:25.876103 kubelet[2636]: E1008 19:53:25.876015 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:26.608248 kubelet[2636]: E1008 19:53:26.608213 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:27.203805 kubelet[2636]: I1008 19:53:27.203769 2636 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:53:27.206281 containerd[1474]: time="2024-10-08T19:53:27.205185774Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:53:27.206805 kubelet[2636]: I1008 19:53:27.205437 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:53:27.306466 kubelet[2636]: I1008 19:53:27.306386 2636 topology_manager.go:215] "Topology Admit Handler" podUID="6584959f-300b-4d4b-8f73-2c3b1fd3a869" podNamespace="kube-system" podName="kube-proxy-kfltp" Oct 8 19:53:27.323409 systemd[1]: Created slice kubepods-besteffort-pod6584959f_300b_4d4b_8f73_2c3b1fd3a869.slice - libcontainer container kubepods-besteffort-pod6584959f_300b_4d4b_8f73_2c3b1fd3a869.slice. 
Oct 8 19:53:27.450858 kubelet[2636]: I1008 19:53:27.450773 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6584959f-300b-4d4b-8f73-2c3b1fd3a869-xtables-lock\") pod \"kube-proxy-kfltp\" (UID: \"6584959f-300b-4d4b-8f73-2c3b1fd3a869\") " pod="kube-system/kube-proxy-kfltp" Oct 8 19:53:27.450858 kubelet[2636]: I1008 19:53:27.450839 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmgv\" (UniqueName: \"kubernetes.io/projected/6584959f-300b-4d4b-8f73-2c3b1fd3a869-kube-api-access-pbmgv\") pod \"kube-proxy-kfltp\" (UID: \"6584959f-300b-4d4b-8f73-2c3b1fd3a869\") " pod="kube-system/kube-proxy-kfltp" Oct 8 19:53:27.450858 kubelet[2636]: I1008 19:53:27.450875 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6584959f-300b-4d4b-8f73-2c3b1fd3a869-kube-proxy\") pod \"kube-proxy-kfltp\" (UID: \"6584959f-300b-4d4b-8f73-2c3b1fd3a869\") " pod="kube-system/kube-proxy-kfltp" Oct 8 19:53:27.451184 kubelet[2636]: I1008 19:53:27.450905 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6584959f-300b-4d4b-8f73-2c3b1fd3a869-lib-modules\") pod \"kube-proxy-kfltp\" (UID: \"6584959f-300b-4d4b-8f73-2c3b1fd3a869\") " pod="kube-system/kube-proxy-kfltp" Oct 8 19:53:27.565579 kubelet[2636]: E1008 19:53:27.564987 2636 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 19:53:27.565579 kubelet[2636]: E1008 19:53:27.565034 2636 projected.go:200] Error preparing data for projected volume kube-api-access-pbmgv for pod kube-system/kube-proxy-kfltp: configmap "kube-root-ca.crt" not found Oct 8 19:53:27.565579 kubelet[2636]: E1008 19:53:27.565121 2636 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6584959f-300b-4d4b-8f73-2c3b1fd3a869-kube-api-access-pbmgv podName:6584959f-300b-4d4b-8f73-2c3b1fd3a869 nodeName:}" failed. No retries permitted until 2024-10-08 19:53:28.065090485 +0000 UTC m=+15.278715415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbmgv" (UniqueName: "kubernetes.io/projected/6584959f-300b-4d4b-8f73-2c3b1fd3a869-kube-api-access-pbmgv") pod "kube-proxy-kfltp" (UID: "6584959f-300b-4d4b-8f73-2c3b1fd3a869") : configmap "kube-root-ca.crt" not found Oct 8 19:53:27.874599 kubelet[2636]: I1008 19:53:27.874416 2636 topology_manager.go:215] "Topology Admit Handler" podUID="de4ef866-9a7e-4b0b-87ba-53029491ec5a" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-p7g2r" Oct 8 19:53:27.881739 systemd[1]: Created slice kubepods-besteffort-podde4ef866_9a7e_4b0b_87ba_53029491ec5a.slice - libcontainer container kubepods-besteffort-podde4ef866_9a7e_4b0b_87ba_53029491ec5a.slice. 
Oct 8 19:53:27.954137 kubelet[2636]: I1008 19:53:27.954081 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de4ef866-9a7e-4b0b-87ba-53029491ec5a-var-lib-calico\") pod \"tigera-operator-77f994b5bb-p7g2r\" (UID: \"de4ef866-9a7e-4b0b-87ba-53029491ec5a\") " pod="tigera-operator/tigera-operator-77f994b5bb-p7g2r" Oct 8 19:53:27.954137 kubelet[2636]: I1008 19:53:27.954120 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cs6\" (UniqueName: \"kubernetes.io/projected/de4ef866-9a7e-4b0b-87ba-53029491ec5a-kube-api-access-z9cs6\") pod \"tigera-operator-77f994b5bb-p7g2r\" (UID: \"de4ef866-9a7e-4b0b-87ba-53029491ec5a\") " pod="tigera-operator/tigera-operator-77f994b5bb-p7g2r" Oct 8 19:53:28.185121 containerd[1474]: time="2024-10-08T19:53:28.184968633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-p7g2r,Uid:de4ef866-9a7e-4b0b-87ba-53029491ec5a,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:53:28.219326 containerd[1474]: time="2024-10-08T19:53:28.218995927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:28.219326 containerd[1474]: time="2024-10-08T19:53:28.219103759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:28.219326 containerd[1474]: time="2024-10-08T19:53:28.219116222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:28.219326 containerd[1474]: time="2024-10-08T19:53:28.219229886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:28.230979 kubelet[2636]: E1008 19:53:28.230954 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:28.232071 containerd[1474]: time="2024-10-08T19:53:28.231738801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfltp,Uid:6584959f-300b-4d4b-8f73-2c3b1fd3a869,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:28.249747 systemd[1]: Started cri-containerd-33ba87980940bc48c976f152a755bdc506df7ce9ec25a29f0cd76d83b2b74d8c.scope - libcontainer container 33ba87980940bc48c976f152a755bdc506df7ce9ec25a29f0cd76d83b2b74d8c. Oct 8 19:53:28.264361 containerd[1474]: time="2024-10-08T19:53:28.263940889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:28.264361 containerd[1474]: time="2024-10-08T19:53:28.264016861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:28.264361 containerd[1474]: time="2024-10-08T19:53:28.264040345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:28.264361 containerd[1474]: time="2024-10-08T19:53:28.264158256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:28.286737 systemd[1]: Started cri-containerd-141bd978e47692391cd303e668a469acc2dcc406c7fcac7de468ae3eece57b7e.scope - libcontainer container 141bd978e47692391cd303e668a469acc2dcc406c7fcac7de468ae3eece57b7e. 
Oct 8 19:53:28.297702 containerd[1474]: time="2024-10-08T19:53:28.297662128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-p7g2r,Uid:de4ef866-9a7e-4b0b-87ba-53029491ec5a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"33ba87980940bc48c976f152a755bdc506df7ce9ec25a29f0cd76d83b2b74d8c\"" Oct 8 19:53:28.302421 containerd[1474]: time="2024-10-08T19:53:28.302266693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:53:28.316718 containerd[1474]: time="2024-10-08T19:53:28.316658332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfltp,Uid:6584959f-300b-4d4b-8f73-2c3b1fd3a869,Namespace:kube-system,Attempt:0,} returns sandbox id \"141bd978e47692391cd303e668a469acc2dcc406c7fcac7de468ae3eece57b7e\"" Oct 8 19:53:28.317542 kubelet[2636]: E1008 19:53:28.317499 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:28.319598 containerd[1474]: time="2024-10-08T19:53:28.319518322Z" level=info msg="CreateContainer within sandbox \"141bd978e47692391cd303e668a469acc2dcc406c7fcac7de468ae3eece57b7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:53:28.342658 containerd[1474]: time="2024-10-08T19:53:28.342592542Z" level=info msg="CreateContainer within sandbox \"141bd978e47692391cd303e668a469acc2dcc406c7fcac7de468ae3eece57b7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c9119600033dd2567fdc22bde686135d9fd6ffe3bfdab406a37e5b22fecf463\"" Oct 8 19:53:28.343357 containerd[1474]: time="2024-10-08T19:53:28.343325439Z" level=info msg="StartContainer for \"8c9119600033dd2567fdc22bde686135d9fd6ffe3bfdab406a37e5b22fecf463\"" Oct 8 19:53:28.378727 systemd[1]: Started cri-containerd-8c9119600033dd2567fdc22bde686135d9fd6ffe3bfdab406a37e5b22fecf463.scope - libcontainer container 
8c9119600033dd2567fdc22bde686135d9fd6ffe3bfdab406a37e5b22fecf463. Oct 8 19:53:28.414520 containerd[1474]: time="2024-10-08T19:53:28.414451755Z" level=info msg="StartContainer for \"8c9119600033dd2567fdc22bde686135d9fd6ffe3bfdab406a37e5b22fecf463\" returns successfully" Oct 8 19:53:28.929684 kubelet[2636]: E1008 19:53:28.929592 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:28.940562 kubelet[2636]: I1008 19:53:28.940453 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kfltp" podStartSLOduration=1.940429578 podStartE2EDuration="1.940429578s" podCreationTimestamp="2024-10-08 19:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:28.940402216 +0000 UTC m=+16.154027156" watchObservedRunningTime="2024-10-08 19:53:28.940429578 +0000 UTC m=+16.154054518" Oct 8 19:53:30.459935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223078416.mount: Deactivated successfully. 
Oct 8 19:53:32.009795 containerd[1474]: time="2024-10-08T19:53:32.009694626Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:32.010523 containerd[1474]: time="2024-10-08T19:53:32.010431829Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136545" Oct 8 19:53:32.011875 containerd[1474]: time="2024-10-08T19:53:32.011793435Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:32.014563 containerd[1474]: time="2024-10-08T19:53:32.014479196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:32.015368 containerd[1474]: time="2024-10-08T19:53:32.015308773Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 3.712878303s" Oct 8 19:53:32.015368 containerd[1474]: time="2024-10-08T19:53:32.015357134Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 19:53:32.021152 containerd[1474]: time="2024-10-08T19:53:32.021097227Z" level=info msg="CreateContainer within sandbox \"33ba87980940bc48c976f152a755bdc506df7ce9ec25a29f0cd76d83b2b74d8c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:53:32.038140 containerd[1474]: time="2024-10-08T19:53:32.038082759Z" level=info msg="CreateContainer within sandbox 
\"33ba87980940bc48c976f152a755bdc506df7ce9ec25a29f0cd76d83b2b74d8c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0bc2653aba5b1b1b96225a3c9e420f6389a0ec0ffd58ce6c1b9bca48b0afaf76\"" Oct 8 19:53:32.038663 containerd[1474]: time="2024-10-08T19:53:32.038633744Z" level=info msg="StartContainer for \"0bc2653aba5b1b1b96225a3c9e420f6389a0ec0ffd58ce6c1b9bca48b0afaf76\"" Oct 8 19:53:32.077879 systemd[1]: Started cri-containerd-0bc2653aba5b1b1b96225a3c9e420f6389a0ec0ffd58ce6c1b9bca48b0afaf76.scope - libcontainer container 0bc2653aba5b1b1b96225a3c9e420f6389a0ec0ffd58ce6c1b9bca48b0afaf76. Oct 8 19:53:32.111579 containerd[1474]: time="2024-10-08T19:53:32.111457781Z" level=info msg="StartContainer for \"0bc2653aba5b1b1b96225a3c9e420f6389a0ec0ffd58ce6c1b9bca48b0afaf76\" returns successfully" Oct 8 19:53:32.950482 kubelet[2636]: I1008 19:53:32.950294 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-p7g2r" podStartSLOduration=2.230746719 podStartE2EDuration="5.950269545s" podCreationTimestamp="2024-10-08 19:53:27 +0000 UTC" firstStartedPulling="2024-10-08 19:53:28.300035293 +0000 UTC m=+15.513660223" lastFinishedPulling="2024-10-08 19:53:32.019558119 +0000 UTC m=+19.233183049" observedRunningTime="2024-10-08 19:53:32.948476961 +0000 UTC m=+20.162101891" watchObservedRunningTime="2024-10-08 19:53:32.950269545 +0000 UTC m=+20.163894475" Oct 8 19:53:35.639912 kubelet[2636]: I1008 19:53:35.639494 2636 topology_manager.go:215] "Topology Admit Handler" podUID="aaec0c94-2260-437f-b84a-ee70c58b5e60" podNamespace="calico-system" podName="calico-typha-57c45c6cbd-jwmx4" Oct 8 19:53:35.666671 systemd[1]: Created slice kubepods-besteffort-podaaec0c94_2260_437f_b84a_ee70c58b5e60.slice - libcontainer container kubepods-besteffort-podaaec0c94_2260_437f_b84a_ee70c58b5e60.slice. 
Oct 8 19:53:35.682992 kubelet[2636]: I1008 19:53:35.682792 2636 topology_manager.go:215] "Topology Admit Handler" podUID="df5bebc9-94a8-4ea9-b16c-012e141f4955" podNamespace="calico-system" podName="calico-node-9tpgx" Oct 8 19:53:35.692399 systemd[1]: Created slice kubepods-besteffort-poddf5bebc9_94a8_4ea9_b16c_012e141f4955.slice - libcontainer container kubepods-besteffort-poddf5bebc9_94a8_4ea9_b16c_012e141f4955.slice. Oct 8 19:53:35.707839 kubelet[2636]: I1008 19:53:35.707776 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaec0c94-2260-437f-b84a-ee70c58b5e60-tigera-ca-bundle\") pod \"calico-typha-57c45c6cbd-jwmx4\" (UID: \"aaec0c94-2260-437f-b84a-ee70c58b5e60\") " pod="calico-system/calico-typha-57c45c6cbd-jwmx4" Oct 8 19:53:35.707839 kubelet[2636]: I1008 19:53:35.707833 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-xtables-lock\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.707839 kubelet[2636]: I1008 19:53:35.707852 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-lib-calico\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708099 kubelet[2636]: I1008 19:53:35.707867 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-policysync\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 
19:53:35.708099 kubelet[2636]: I1008 19:53:35.707884 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjwr\" (UniqueName: \"kubernetes.io/projected/df5bebc9-94a8-4ea9-b16c-012e141f4955-kube-api-access-rgjwr\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708099 kubelet[2636]: I1008 19:53:35.707902 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdp4\" (UniqueName: \"kubernetes.io/projected/aaec0c94-2260-437f-b84a-ee70c58b5e60-kube-api-access-jsdp4\") pod \"calico-typha-57c45c6cbd-jwmx4\" (UID: \"aaec0c94-2260-437f-b84a-ee70c58b5e60\") " pod="calico-system/calico-typha-57c45c6cbd-jwmx4" Oct 8 19:53:35.708099 kubelet[2636]: I1008 19:53:35.707916 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aaec0c94-2260-437f-b84a-ee70c58b5e60-typha-certs\") pod \"calico-typha-57c45c6cbd-jwmx4\" (UID: \"aaec0c94-2260-437f-b84a-ee70c58b5e60\") " pod="calico-system/calico-typha-57c45c6cbd-jwmx4" Oct 8 19:53:35.708099 kubelet[2636]: I1008 19:53:35.707930 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-lib-modules\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708284 kubelet[2636]: I1008 19:53:35.707954 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df5bebc9-94a8-4ea9-b16c-012e141f4955-node-certs\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708284 
kubelet[2636]: I1008 19:53:35.707968 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df5bebc9-94a8-4ea9-b16c-012e141f4955-tigera-ca-bundle\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708284 kubelet[2636]: I1008 19:53:35.707981 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-net-dir\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708284 kubelet[2636]: I1008 19:53:35.707995 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-flexvol-driver-host\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708284 kubelet[2636]: I1008 19:53:35.708010 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-bin-dir\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708517 kubelet[2636]: I1008 19:53:35.708023 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-log-dir\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.708517 kubelet[2636]: I1008 19:53:35.708036 2636 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-run-calico\") pod \"calico-node-9tpgx\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " pod="calico-system/calico-node-9tpgx" Oct 8 19:53:35.803559 kubelet[2636]: I1008 19:53:35.803274 2636 topology_manager.go:215] "Topology Admit Handler" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" podNamespace="calico-system" podName="csi-node-driver-h9hb6" Oct 8 19:53:35.803735 kubelet[2636]: E1008 19:53:35.803601 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:35.816300 kubelet[2636]: I1008 19:53:35.816233 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5b65116-575e-4269-8542-d6d284a4cec8-varrun\") pod \"csi-node-driver-h9hb6\" (UID: \"a5b65116-575e-4269-8542-d6d284a4cec8\") " pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:35.816463 kubelet[2636]: I1008 19:53:35.816314 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5b65116-575e-4269-8542-d6d284a4cec8-kubelet-dir\") pod \"csi-node-driver-h9hb6\" (UID: \"a5b65116-575e-4269-8542-d6d284a4cec8\") " pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:35.816463 kubelet[2636]: I1008 19:53:35.816331 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kr4h\" (UniqueName: \"kubernetes.io/projected/a5b65116-575e-4269-8542-d6d284a4cec8-kube-api-access-6kr4h\") pod \"csi-node-driver-h9hb6\" (UID: 
\"a5b65116-575e-4269-8542-d6d284a4cec8\") " pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:35.816463 kubelet[2636]: I1008 19:53:35.816380 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5b65116-575e-4269-8542-d6d284a4cec8-registration-dir\") pod \"csi-node-driver-h9hb6\" (UID: \"a5b65116-575e-4269-8542-d6d284a4cec8\") " pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:35.816463 kubelet[2636]: I1008 19:53:35.816406 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5b65116-575e-4269-8542-d6d284a4cec8-socket-dir\") pod \"csi-node-driver-h9hb6\" (UID: \"a5b65116-575e-4269-8542-d6d284a4cec8\") " pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:35.824804 kubelet[2636]: E1008 19:53:35.824728 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:35.824804 kubelet[2636]: W1008 19:53:35.824794 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:35.825009 kubelet[2636]: E1008 19:53:35.824845 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:35.974629 kubelet[2636]: E1008 19:53:35.974572 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:35.975341 containerd[1474]: time="2024-10-08T19:53:35.975290332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c45c6cbd-jwmx4,Uid:aaec0c94-2260-437f-b84a-ee70c58b5e60,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:35.996457 kubelet[2636]: E1008 19:53:35.996156 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:35.997970 containerd[1474]: time="2024-10-08T19:53:35.997434731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9tpgx,Uid:df5bebc9-94a8-4ea9-b16c-012e141f4955,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:36.041664 containerd[1474]: time="2024-10-08T19:53:36.038034390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:36.041664 containerd[1474]: time="2024-10-08T19:53:36.038110303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:36.041664 containerd[1474]: time="2024-10-08T19:53:36.038125060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:36.041664 containerd[1474]: time="2024-10-08T19:53:36.038366213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:36.052850 containerd[1474]: time="2024-10-08T19:53:36.052553537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:36.052850 containerd[1474]: time="2024-10-08T19:53:36.052613870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:36.052850 containerd[1474]: time="2024-10-08T19:53:36.052632124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:36.052850 containerd[1474]: time="2024-10-08T19:53:36.052764443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:36.064281 systemd[1]: Started cri-containerd-f427a1a5444dfd8c2cd4e839b58d92a18ea760917744b116cd4c065f30f9b6ce.scope - libcontainer container f427a1a5444dfd8c2cd4e839b58d92a18ea760917744b116cd4c065f30f9b6ce. Oct 8 19:53:36.081681 systemd[1]: Started cri-containerd-779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f.scope - libcontainer container 779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f. 
Oct 8 19:53:36.128751 containerd[1474]: time="2024-10-08T19:53:36.127165955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9tpgx,Uid:df5bebc9-94a8-4ea9-b16c-012e141f4955,Namespace:calico-system,Attempt:0,} returns sandbox id \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\"" Oct 8 19:53:36.128751 containerd[1474]: time="2024-10-08T19:53:36.128226124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c45c6cbd-jwmx4,Uid:aaec0c94-2260-437f-b84a-ee70c58b5e60,Namespace:calico-system,Attempt:0,} returns sandbox id \"f427a1a5444dfd8c2cd4e839b58d92a18ea760917744b116cd4c065f30f9b6ce\"" Oct 8 19:53:36.129877 kubelet[2636]: E1008 19:53:36.129841 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:36.130574 kubelet[2636]: E1008 19:53:36.130395 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:36.132078 containerd[1474]: time="2024-10-08T19:53:36.131898556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:53:37.875374 kubelet[2636]: E1008 19:53:37.875284 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:39.874701 kubelet[2636]: E1008 19:53:39.874615 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" 
podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:40.645678 containerd[1474]: time="2024-10-08T19:53:40.645574900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.646387 containerd[1474]: time="2024-10-08T19:53:40.646314298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 19:53:40.647777 containerd[1474]: time="2024-10-08T19:53:40.647731556Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.650695 containerd[1474]: time="2024-10-08T19:53:40.650635355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.651326 containerd[1474]: time="2024-10-08T19:53:40.651276257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 4.519216348s" Oct 8 19:53:40.651326 containerd[1474]: time="2024-10-08T19:53:40.651323697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 19:53:40.652837 containerd[1474]: time="2024-10-08T19:53:40.652793655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:53:40.668679 containerd[1474]: time="2024-10-08T19:53:40.668629127Z" level=info msg="CreateContainer within sandbox 
\"f427a1a5444dfd8c2cd4e839b58d92a18ea760917744b116cd4c065f30f9b6ce\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:53:40.690736 containerd[1474]: time="2024-10-08T19:53:40.690664473Z" level=info msg="CreateContainer within sandbox \"f427a1a5444dfd8c2cd4e839b58d92a18ea760917744b116cd4c065f30f9b6ce\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6fba9c4016f617e210a370efd84f037bfe486370010b7ec8af19d1be61374f33\"" Oct 8 19:53:40.691382 containerd[1474]: time="2024-10-08T19:53:40.691343356Z" level=info msg="StartContainer for \"6fba9c4016f617e210a370efd84f037bfe486370010b7ec8af19d1be61374f33\"" Oct 8 19:53:40.735886 systemd[1]: Started cri-containerd-6fba9c4016f617e210a370efd84f037bfe486370010b7ec8af19d1be61374f33.scope - libcontainer container 6fba9c4016f617e210a370efd84f037bfe486370010b7ec8af19d1be61374f33. Oct 8 19:53:40.792281 containerd[1474]: time="2024-10-08T19:53:40.792201197Z" level=info msg="StartContainer for \"6fba9c4016f617e210a370efd84f037bfe486370010b7ec8af19d1be61374f33\" returns successfully" Oct 8 19:53:40.978424 kubelet[2636]: E1008 19:53:40.978243 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:40.991263 kubelet[2636]: I1008 19:53:40.991122 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57c45c6cbd-jwmx4" podStartSLOduration=1.469986416 podStartE2EDuration="5.991091389s" podCreationTimestamp="2024-10-08 19:53:35 +0000 UTC" firstStartedPulling="2024-10-08 19:53:36.131239168 +0000 UTC m=+23.344864099" lastFinishedPulling="2024-10-08 19:53:40.652344152 +0000 UTC m=+27.865969072" observedRunningTime="2024-10-08 19:53:40.98949303 +0000 UTC m=+28.203117960" watchObservedRunningTime="2024-10-08 19:53:40.991091389 +0000 UTC m=+28.204716329" Oct 8 19:53:41.044164 kubelet[2636]: E1008 19:53:41.044102 2636 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.044164 kubelet[2636]: W1008 19:53:41.044145 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.044428 kubelet[2636]: E1008 19:53:41.044176 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.044488 kubelet[2636]: E1008 19:53:41.044429 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.044488 kubelet[2636]: W1008 19:53:41.044439 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.044488 kubelet[2636]: E1008 19:53:41.044460 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.044846 kubelet[2636]: E1008 19:53:41.044809 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.044846 kubelet[2636]: W1008 19:53:41.044839 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.044940 kubelet[2636]: E1008 19:53:41.044872 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.045178 kubelet[2636]: E1008 19:53:41.045161 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.045178 kubelet[2636]: W1008 19:53:41.045171 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.045178 kubelet[2636]: E1008 19:53:41.045179 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.045403 kubelet[2636]: E1008 19:53:41.045388 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.045403 kubelet[2636]: W1008 19:53:41.045398 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.045494 kubelet[2636]: E1008 19:53:41.045406 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.045682 kubelet[2636]: E1008 19:53:41.045668 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.045682 kubelet[2636]: W1008 19:53:41.045679 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.045739 kubelet[2636]: E1008 19:53:41.045689 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.045906 kubelet[2636]: E1008 19:53:41.045887 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.045906 kubelet[2636]: W1008 19:53:41.045899 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.045989 kubelet[2636]: E1008 19:53:41.045910 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.046154 kubelet[2636]: E1008 19:53:41.046138 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.046154 kubelet[2636]: W1008 19:53:41.046148 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.046154 kubelet[2636]: E1008 19:53:41.046156 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.046369 kubelet[2636]: E1008 19:53:41.046354 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.046369 kubelet[2636]: W1008 19:53:41.046366 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.046470 kubelet[2636]: E1008 19:53:41.046375 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.046619 kubelet[2636]: E1008 19:53:41.046604 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.046619 kubelet[2636]: W1008 19:53:41.046615 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.046689 kubelet[2636]: E1008 19:53:41.046624 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.046821 kubelet[2636]: E1008 19:53:41.046807 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.046821 kubelet[2636]: W1008 19:53:41.046817 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.046890 kubelet[2636]: E1008 19:53:41.046825 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.047010 kubelet[2636]: E1008 19:53:41.046997 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.047010 kubelet[2636]: W1008 19:53:41.047007 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.047064 kubelet[2636]: E1008 19:53:41.047014 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.047246 kubelet[2636]: E1008 19:53:41.047225 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.047287 kubelet[2636]: W1008 19:53:41.047246 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.047287 kubelet[2636]: E1008 19:53:41.047261 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.047657 kubelet[2636]: E1008 19:53:41.047638 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.047657 kubelet[2636]: W1008 19:53:41.047651 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.047657 kubelet[2636]: E1008 19:53:41.047660 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.047879 kubelet[2636]: E1008 19:53:41.047862 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.047879 kubelet[2636]: W1008 19:53:41.047874 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.047879 kubelet[2636]: E1008 19:53:41.047882 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.064380 kubelet[2636]: E1008 19:53:41.064334 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.064380 kubelet[2636]: W1008 19:53:41.064362 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.064595 kubelet[2636]: E1008 19:53:41.064393 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.064990 kubelet[2636]: E1008 19:53:41.064811 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.064990 kubelet[2636]: W1008 19:53:41.064841 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.064990 kubelet[2636]: E1008 19:53:41.064876 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.065307 kubelet[2636]: E1008 19:53:41.065254 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.065361 kubelet[2636]: W1008 19:53:41.065308 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.065361 kubelet[2636]: E1008 19:53:41.065346 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.065720 kubelet[2636]: E1008 19:53:41.065701 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.065720 kubelet[2636]: W1008 19:53:41.065717 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.065794 kubelet[2636]: E1008 19:53:41.065735 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.065985 kubelet[2636]: E1008 19:53:41.065969 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.065985 kubelet[2636]: W1008 19:53:41.065981 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.066044 kubelet[2636]: E1008 19:53:41.065996 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.066231 kubelet[2636]: E1008 19:53:41.066213 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.066231 kubelet[2636]: W1008 19:53:41.066228 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.066288 kubelet[2636]: E1008 19:53:41.066248 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.066511 kubelet[2636]: E1008 19:53:41.066494 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.066511 kubelet[2636]: W1008 19:53:41.066509 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.066616 kubelet[2636]: E1008 19:53:41.066553 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.066856 kubelet[2636]: E1008 19:53:41.066835 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.066888 kubelet[2636]: W1008 19:53:41.066857 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.066921 kubelet[2636]: E1008 19:53:41.066900 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.067135 kubelet[2636]: E1008 19:53:41.067115 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.067173 kubelet[2636]: W1008 19:53:41.067134 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.067173 kubelet[2636]: E1008 19:53:41.067157 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.067596 kubelet[2636]: E1008 19:53:41.067564 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.067596 kubelet[2636]: W1008 19:53:41.067587 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.067668 kubelet[2636]: E1008 19:53:41.067602 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.067827 kubelet[2636]: E1008 19:53:41.067807 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.067827 kubelet[2636]: W1008 19:53:41.067819 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.067869 kubelet[2636]: E1008 19:53:41.067835 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.068085 kubelet[2636]: E1008 19:53:41.068063 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.068085 kubelet[2636]: W1008 19:53:41.068075 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.068144 kubelet[2636]: E1008 19:53:41.068091 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.068523 kubelet[2636]: E1008 19:53:41.068487 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.068590 kubelet[2636]: W1008 19:53:41.068523 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.068590 kubelet[2636]: E1008 19:53:41.068580 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.068847 kubelet[2636]: E1008 19:53:41.068810 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.068847 kubelet[2636]: W1008 19:53:41.068830 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.068847 kubelet[2636]: E1008 19:53:41.068853 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.069192 kubelet[2636]: E1008 19:53:41.069163 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.069192 kubelet[2636]: W1008 19:53:41.069177 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.069294 kubelet[2636]: E1008 19:53:41.069213 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.069468 kubelet[2636]: E1008 19:53:41.069407 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.069468 kubelet[2636]: W1008 19:53:41.069421 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.069607 kubelet[2636]: E1008 19:53:41.069471 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.069723 kubelet[2636]: E1008 19:53:41.069701 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.069723 kubelet[2636]: W1008 19:53:41.069716 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.069791 kubelet[2636]: E1008 19:53:41.069735 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:41.070083 kubelet[2636]: E1008 19:53:41.070062 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:41.070083 kubelet[2636]: W1008 19:53:41.070075 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:41.070083 kubelet[2636]: E1008 19:53:41.070085 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:41.874727 kubelet[2636]: E1008 19:53:41.874673 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:41.979551 kubelet[2636]: I1008 19:53:41.979497 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:53:41.980138 kubelet[2636]: E1008 19:53:41.980123 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:42.057658 kubelet[2636]: E1008 19:53:42.057619 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.057658 kubelet[2636]: W1008 19:53:42.057647 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.057658 kubelet[2636]: E1008 19:53:42.057670 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.058364 kubelet[2636]: E1008 19:53:42.058348 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.058364 kubelet[2636]: W1008 19:53:42.058363 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.058433 kubelet[2636]: E1008 19:53:42.058376 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.058683 kubelet[2636]: E1008 19:53:42.058663 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.058683 kubelet[2636]: W1008 19:53:42.058678 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.058763 kubelet[2636]: E1008 19:53:42.058691 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.058964 kubelet[2636]: E1008 19:53:42.058949 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.058994 kubelet[2636]: W1008 19:53:42.058963 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.058994 kubelet[2636]: E1008 19:53:42.058975 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.059234 kubelet[2636]: E1008 19:53:42.059218 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.059288 kubelet[2636]: W1008 19:53:42.059235 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.059288 kubelet[2636]: E1008 19:53:42.059248 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.059601 kubelet[2636]: E1008 19:53:42.059585 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.059601 kubelet[2636]: W1008 19:53:42.059600 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.059671 kubelet[2636]: E1008 19:53:42.059613 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.059878 kubelet[2636]: E1008 19:53:42.059863 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.059878 kubelet[2636]: W1008 19:53:42.059877 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.059934 kubelet[2636]: E1008 19:53:42.059888 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.060111 kubelet[2636]: E1008 19:53:42.060096 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.060111 kubelet[2636]: W1008 19:53:42.060110 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.060167 kubelet[2636]: E1008 19:53:42.060121 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.060543 kubelet[2636]: E1008 19:53:42.060516 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.060597 kubelet[2636]: W1008 19:53:42.060544 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.060597 kubelet[2636]: E1008 19:53:42.060568 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.060809 kubelet[2636]: E1008 19:53:42.060795 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.060809 kubelet[2636]: W1008 19:53:42.060808 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.060865 kubelet[2636]: E1008 19:53:42.060818 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.061037 kubelet[2636]: E1008 19:53:42.061025 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.061037 kubelet[2636]: W1008 19:53:42.061035 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.061078 kubelet[2636]: E1008 19:53:42.061043 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.061251 kubelet[2636]: E1008 19:53:42.061240 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.061273 kubelet[2636]: W1008 19:53:42.061250 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.061273 kubelet[2636]: E1008 19:53:42.061260 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.061511 kubelet[2636]: E1008 19:53:42.061498 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.061511 kubelet[2636]: W1008 19:53:42.061510 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.061591 kubelet[2636]: E1008 19:53:42.061519 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.061751 kubelet[2636]: E1008 19:53:42.061738 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.061751 kubelet[2636]: W1008 19:53:42.061749 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.061800 kubelet[2636]: E1008 19:53:42.061763 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.061966 kubelet[2636]: E1008 19:53:42.061954 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.061966 kubelet[2636]: W1008 19:53:42.061965 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.062024 kubelet[2636]: E1008 19:53:42.061973 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.074545 kubelet[2636]: E1008 19:53:42.074480 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.074545 kubelet[2636]: W1008 19:53:42.074505 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.074545 kubelet[2636]: E1008 19:53:42.074567 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.074903 kubelet[2636]: E1008 19:53:42.074859 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.074932 kubelet[2636]: W1008 19:53:42.074901 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.074956 kubelet[2636]: E1008 19:53:42.074935 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.075333 kubelet[2636]: E1008 19:53:42.075313 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.075333 kubelet[2636]: W1008 19:53:42.075326 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.075408 kubelet[2636]: E1008 19:53:42.075340 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.075645 kubelet[2636]: E1008 19:53:42.075623 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.075645 kubelet[2636]: W1008 19:53:42.075637 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.075693 kubelet[2636]: E1008 19:53:42.075652 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.075894 kubelet[2636]: E1008 19:53:42.075879 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.075894 kubelet[2636]: W1008 19:53:42.075890 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.075957 kubelet[2636]: E1008 19:53:42.075918 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.076099 kubelet[2636]: E1008 19:53:42.076082 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.076099 kubelet[2636]: W1008 19:53:42.076096 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.076142 kubelet[2636]: E1008 19:53:42.076132 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.076304 kubelet[2636]: E1008 19:53:42.076289 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.076304 kubelet[2636]: W1008 19:53:42.076300 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.076367 kubelet[2636]: E1008 19:53:42.076325 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.076499 kubelet[2636]: E1008 19:53:42.076484 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.076499 kubelet[2636]: W1008 19:53:42.076495 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.076575 kubelet[2636]: E1008 19:53:42.076512 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.076798 kubelet[2636]: E1008 19:53:42.076773 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.076798 kubelet[2636]: W1008 19:53:42.076789 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.076849 kubelet[2636]: E1008 19:53:42.076820 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.077012 kubelet[2636]: E1008 19:53:42.076998 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.077012 kubelet[2636]: W1008 19:53:42.077009 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.077059 kubelet[2636]: E1008 19:53:42.077022 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.077290 kubelet[2636]: E1008 19:53:42.077271 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.077317 kubelet[2636]: W1008 19:53:42.077288 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.077352 kubelet[2636]: E1008 19:53:42.077317 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.077575 kubelet[2636]: E1008 19:53:42.077524 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.077575 kubelet[2636]: W1008 19:53:42.077572 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.077628 kubelet[2636]: E1008 19:53:42.077589 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.077857 kubelet[2636]: E1008 19:53:42.077837 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.077857 kubelet[2636]: W1008 19:53:42.077853 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.077918 kubelet[2636]: E1008 19:53:42.077869 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.078086 kubelet[2636]: E1008 19:53:42.078070 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.078086 kubelet[2636]: W1008 19:53:42.078085 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.078136 kubelet[2636]: E1008 19:53:42.078099 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.078297 kubelet[2636]: E1008 19:53:42.078281 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.078297 kubelet[2636]: W1008 19:53:42.078293 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.078372 kubelet[2636]: E1008 19:53:42.078307 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.078553 kubelet[2636]: E1008 19:53:42.078512 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.078553 kubelet[2636]: W1008 19:53:42.078523 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.078599 kubelet[2636]: E1008 19:53:42.078561 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.078794 kubelet[2636]: E1008 19:53:42.078777 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.078794 kubelet[2636]: W1008 19:53:42.078790 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.078855 kubelet[2636]: E1008 19:53:42.078803 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:42.079053 kubelet[2636]: E1008 19:53:42.079037 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:42.079053 kubelet[2636]: W1008 19:53:42.079049 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:42.079101 kubelet[2636]: E1008 19:53:42.079058 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:42.570817 containerd[1474]: time="2024-10-08T19:53:42.570734722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:42.573958 containerd[1474]: time="2024-10-08T19:53:42.573854017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 19:53:42.575483 containerd[1474]: time="2024-10-08T19:53:42.575445888Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:42.578043 containerd[1474]: time="2024-10-08T19:53:42.577983153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:42.578888 containerd[1474]: time="2024-10-08T19:53:42.578823946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.925979746s" Oct 8 19:53:42.578930 containerd[1474]: time="2024-10-08T19:53:42.578888181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 19:53:42.581271 containerd[1474]: time="2024-10-08T19:53:42.581220909Z" level=info msg="CreateContainer within sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:53:42.603323 containerd[1474]: time="2024-10-08T19:53:42.603265364Z" level=info msg="CreateContainer within sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\"" Oct 8 19:53:42.604103 containerd[1474]: time="2024-10-08T19:53:42.603942708Z" level=info msg="StartContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\"" Oct 8 19:53:42.639842 systemd[1]: Started cri-containerd-be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc.scope - libcontainer container be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc. Oct 8 19:53:42.703032 systemd[1]: cri-containerd-be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc.scope: Deactivated successfully. Oct 8 19:53:43.964482 containerd[1474]: time="2024-10-08T19:53:43.964393145Z" level=info msg="StartContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" returns successfully" Oct 8 19:53:43.966756 kubelet[2636]: E1008 19:53:43.966716 2636 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.091s" Oct 8 19:53:43.967320 kubelet[2636]: E1008 19:53:43.966958 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:43.969343 containerd[1474]: time="2024-10-08T19:53:43.968721750Z" level=info msg="StopContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" with timeout 5 (s)" Oct 8 19:53:43.975808 containerd[1474]: time="2024-10-08T19:53:43.975747225Z" 
level=info msg="Stop container \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" with signal terminated" Oct 8 19:53:43.999258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc-rootfs.mount: Deactivated successfully. Oct 8 19:53:44.675818 containerd[1474]: time="2024-10-08T19:53:44.672726483Z" level=info msg="shim disconnected" id=be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc namespace=k8s.io Oct 8 19:53:44.675818 containerd[1474]: time="2024-10-08T19:53:44.675805018Z" level=warning msg="cleaning up after shim disconnected" id=be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc namespace=k8s.io Oct 8 19:53:44.675818 containerd[1474]: time="2024-10-08T19:53:44.675828503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:44.695594 containerd[1474]: time="2024-10-08T19:53:44.695478608Z" level=info msg="StopContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" returns successfully" Oct 8 19:53:44.696324 containerd[1474]: time="2024-10-08T19:53:44.696275893Z" level=info msg="StopPodSandbox for \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\"" Oct 8 19:53:44.696324 containerd[1474]: time="2024-10-08T19:53:44.696328665Z" level=info msg="Container to stop \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:53:44.701861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f-shm.mount: Deactivated successfully. Oct 8 19:53:44.704289 systemd[1]: cri-containerd-779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f.scope: Deactivated successfully. 
Oct 8 19:53:44.730160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f-rootfs.mount: Deactivated successfully. Oct 8 19:53:44.737354 containerd[1474]: time="2024-10-08T19:53:44.737275036Z" level=info msg="shim disconnected" id=779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f namespace=k8s.io Oct 8 19:53:44.737354 containerd[1474]: time="2024-10-08T19:53:44.737339941Z" level=warning msg="cleaning up after shim disconnected" id=779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f namespace=k8s.io Oct 8 19:53:44.737354 containerd[1474]: time="2024-10-08T19:53:44.737349400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:44.756046 containerd[1474]: time="2024-10-08T19:53:44.755986452Z" level=info msg="TearDown network for sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" successfully" Oct 8 19:53:44.756046 containerd[1474]: time="2024-10-08T19:53:44.756033102Z" level=info msg="StopPodSandbox for \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" returns successfully" Oct 8 19:53:44.797508 kubelet[2636]: I1008 19:53:44.797457 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjwr\" (UniqueName: \"kubernetes.io/projected/df5bebc9-94a8-4ea9-b16c-012e141f4955-kube-api-access-rgjwr\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797508 kubelet[2636]: I1008 19:53:44.797508 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-net-dir\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797552 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-bin-dir\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797573 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-log-dir\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797591 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-xtables-lock\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797620 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df5bebc9-94a8-4ea9-b16c-012e141f4955-node-certs\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797642 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df5bebc9-94a8-4ea9-b16c-012e141f4955-tigera-ca-bundle\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.797814 kubelet[2636]: I1008 19:53:44.797674 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-run-calico\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.798052 kubelet[2636]: I1008 
19:53:44.797692 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-flexvol-driver-host\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.798052 kubelet[2636]: I1008 19:53:44.797712 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-policysync\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.798052 kubelet[2636]: I1008 19:53:44.797730 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-lib-modules\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.798052 kubelet[2636]: I1008 19:53:44.797755 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-lib-calico\") pod \"df5bebc9-94a8-4ea9-b16c-012e141f4955\" (UID: \"df5bebc9-94a8-4ea9-b16c-012e141f4955\") " Oct 8 19:53:44.798052 kubelet[2636]: I1008 19:53:44.797832 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798244 kubelet[2636]: I1008 19:53:44.797879 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798244 kubelet[2636]: I1008 19:53:44.797901 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798244 kubelet[2636]: I1008 19:53:44.797921 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798244 kubelet[2636]: I1008 19:53:44.797939 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798244 kubelet[2636]: I1008 19:53:44.798034 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798436 kubelet[2636]: I1008 19:53:44.798074 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798436 kubelet[2636]: I1008 19:53:44.798135 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-policysync" (OuterVolumeSpecName: "policysync") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798436 kubelet[2636]: I1008 19:53:44.798160 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:53:44.798666 kubelet[2636]: I1008 19:53:44.798590 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5bebc9-94a8-4ea9-b16c-012e141f4955-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:53:44.802163 kubelet[2636]: I1008 19:53:44.802089 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5bebc9-94a8-4ea9-b16c-012e141f4955-node-certs" (OuterVolumeSpecName: "node-certs") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:53:44.802163 kubelet[2636]: I1008 19:53:44.802110 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5bebc9-94a8-4ea9-b16c-012e141f4955-kube-api-access-rgjwr" (OuterVolumeSpecName: "kube-api-access-rgjwr") pod "df5bebc9-94a8-4ea9-b16c-012e141f4955" (UID: "df5bebc9-94a8-4ea9-b16c-012e141f4955"). InnerVolumeSpecName "kube-api-access-rgjwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:53:44.804451 systemd[1]: var-lib-kubelet-pods-df5bebc9\x2d94a8\x2d4ea9\x2db16c\x2d012e141f4955-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drgjwr.mount: Deactivated successfully. Oct 8 19:53:44.804634 systemd[1]: var-lib-kubelet-pods-df5bebc9\x2d94a8\x2d4ea9\x2db16c\x2d012e141f4955-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 8 19:53:44.885035 systemd[1]: Removed slice kubepods-besteffort-poddf5bebc9_94a8_4ea9_b16c_012e141f4955.slice - libcontainer container kubepods-besteffort-poddf5bebc9_94a8_4ea9_b16c_012e141f4955.slice. 
Oct 8 19:53:44.898326 kubelet[2636]: I1008 19:53:44.898282 2636 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-policysync\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898326 kubelet[2636]: I1008 19:53:44.898312 2636 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898326 kubelet[2636]: I1008 19:53:44.898321 2636 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898326 kubelet[2636]: I1008 19:53:44.898333 2636 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rgjwr\" (UniqueName: \"kubernetes.io/projected/df5bebc9-94a8-4ea9-b16c-012e141f4955-kube-api-access-rgjwr\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898326 kubelet[2636]: I1008 19:53:44.898343 2636 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898351 2636 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898359 2636 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898367 2636 reconciler_common.go:289] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898375 2636 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df5bebc9-94a8-4ea9-b16c-012e141f4955-node-certs\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898382 2636 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df5bebc9-94a8-4ea9-b16c-012e141f4955-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898390 2636 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-var-run-calico\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.898623 kubelet[2636]: I1008 19:53:44.898398 2636 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df5bebc9-94a8-4ea9-b16c-012e141f4955-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Oct 8 19:53:44.971713 kubelet[2636]: I1008 19:53:44.971397 2636 scope.go:117] "RemoveContainer" containerID="be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc" Oct 8 19:53:44.973899 containerd[1474]: time="2024-10-08T19:53:44.973844789Z" level=info msg="RemoveContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\"" Oct 8 19:53:44.978907 containerd[1474]: time="2024-10-08T19:53:44.978858674Z" level=info msg="RemoveContainer for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" returns successfully" Oct 8 19:53:44.979620 kubelet[2636]: I1008 19:53:44.979578 2636 scope.go:117] "RemoveContainer" containerID="be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc" Oct 8 19:53:44.981956 
containerd[1474]: time="2024-10-08T19:53:44.981900408Z" level=error msg="ContainerStatus for \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\": not found" Oct 8 19:53:44.984121 kubelet[2636]: E1008 19:53:44.984081 2636 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\": not found" containerID="be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc" Oct 8 19:53:44.984201 kubelet[2636]: I1008 19:53:44.984124 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc"} err="failed to get container status \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"be5a9f14cd69c179c0e5842fd78b7afca676e70435593644367d14240c9898cc\": not found" Oct 8 19:53:45.006443 kubelet[2636]: I1008 19:53:45.006377 2636 topology_manager.go:215] "Topology Admit Handler" podUID="305fcc0b-fabd-45c7-8c69-f457894e93d2" podNamespace="calico-system" podName="calico-node-mg9zh" Oct 8 19:53:45.006636 kubelet[2636]: E1008 19:53:45.006462 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df5bebc9-94a8-4ea9-b16c-012e141f4955" containerName="flexvol-driver" Oct 8 19:53:45.006636 kubelet[2636]: I1008 19:53:45.006498 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5bebc9-94a8-4ea9-b16c-012e141f4955" containerName="flexvol-driver" Oct 8 19:53:45.015731 systemd[1]: Created slice kubepods-besteffort-pod305fcc0b_fabd_45c7_8c69_f457894e93d2.slice - libcontainer container 
kubepods-besteffort-pod305fcc0b_fabd_45c7_8c69_f457894e93d2.slice. Oct 8 19:53:45.099852 kubelet[2636]: I1008 19:53:45.099771 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-var-run-calico\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.099852 kubelet[2636]: I1008 19:53:45.099817 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-flexvol-driver-host\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.099852 kubelet[2636]: I1008 19:53:45.099838 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/305fcc0b-fabd-45c7-8c69-f457894e93d2-node-certs\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.099852 kubelet[2636]: I1008 19:53:45.099854 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-cni-net-dir\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100173 kubelet[2636]: I1008 19:53:45.099870 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-cni-log-dir\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 
19:53:45.100173 kubelet[2636]: I1008 19:53:45.100001 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-policysync\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100173 kubelet[2636]: I1008 19:53:45.100055 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-var-lib-calico\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100173 kubelet[2636]: I1008 19:53:45.100091 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-lib-modules\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100173 kubelet[2636]: I1008 19:53:45.100113 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2th2\" (UniqueName: \"kubernetes.io/projected/305fcc0b-fabd-45c7-8c69-f457894e93d2-kube-api-access-j2th2\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100345 kubelet[2636]: I1008 19:53:45.100148 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/305fcc0b-fabd-45c7-8c69-f457894e93d2-tigera-ca-bundle\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100345 kubelet[2636]: I1008 19:53:45.100170 2636 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-xtables-lock\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.100345 kubelet[2636]: I1008 19:53:45.100190 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/305fcc0b-fabd-45c7-8c69-f457894e93d2-cni-bin-dir\") pod \"calico-node-mg9zh\" (UID: \"305fcc0b-fabd-45c7-8c69-f457894e93d2\") " pod="calico-system/calico-node-mg9zh" Oct 8 19:53:45.321901 kubelet[2636]: E1008 19:53:45.321837 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:45.323995 containerd[1474]: time="2024-10-08T19:53:45.323940653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mg9zh,Uid:305fcc0b-fabd-45c7-8c69-f457894e93d2,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:45.742812 containerd[1474]: time="2024-10-08T19:53:45.742589224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:45.742812 containerd[1474]: time="2024-10-08T19:53:45.742659411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:45.742812 containerd[1474]: time="2024-10-08T19:53:45.742684569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:45.742812 containerd[1474]: time="2024-10-08T19:53:45.742774814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:45.765687 systemd[1]: Started cri-containerd-8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1.scope - libcontainer container 8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1. Oct 8 19:53:45.809179 containerd[1474]: time="2024-10-08T19:53:45.809116408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mg9zh,Uid:305fcc0b-fabd-45c7-8c69-f457894e93d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\"" Oct 8 19:53:45.809987 kubelet[2636]: E1008 19:53:45.809927 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:45.812377 containerd[1474]: time="2024-10-08T19:53:45.812307025Z" level=info msg="CreateContainer within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:53:45.820997 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:55880.service - OpenSSH per-connection server daemon (10.0.0.1:55880). 
Oct 8 19:53:45.831769 containerd[1474]: time="2024-10-08T19:53:45.831706130Z" level=info msg="CreateContainer within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400\"" Oct 8 19:53:45.833668 containerd[1474]: time="2024-10-08T19:53:45.832883690Z" level=info msg="StartContainer for \"ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400\"" Oct 8 19:53:45.855109 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 55880 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:53:45.856974 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:45.863934 systemd-logind[1458]: New session 8 of user core. Oct 8 19:53:45.873672 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:53:45.874711 kubelet[2636]: E1008 19:53:45.874656 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:45.877199 systemd[1]: Started cri-containerd-ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400.scope - libcontainer container ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400. Oct 8 19:53:45.912185 containerd[1474]: time="2024-10-08T19:53:45.912127311Z" level=info msg="StartContainer for \"ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400\" returns successfully" Oct 8 19:53:45.927742 systemd[1]: cri-containerd-ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400.scope: Deactivated successfully. 
Oct 8 19:53:45.977796 kubelet[2636]: E1008 19:53:45.977744 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:45.981075 containerd[1474]: time="2024-10-08T19:53:45.980984966Z" level=info msg="shim disconnected" id=ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400 namespace=k8s.io Oct 8 19:53:45.981408 containerd[1474]: time="2024-10-08T19:53:45.981080692Z" level=warning msg="cleaning up after shim disconnected" id=ef9381e5bfb633b3bd7bcb62b7a2e323fd2ee6bbfb8e27a5be87afffdd4eb400 namespace=k8s.io Oct 8 19:53:45.981408 containerd[1474]: time="2024-10-08T19:53:45.981096021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:46.007547 containerd[1474]: time="2024-10-08T19:53:46.007372871Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:53:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:53:46.023749 sshd[3435]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:46.028364 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:55880.service: Deactivated successfully. Oct 8 19:53:46.030558 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:53:46.031451 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:53:46.032480 systemd-logind[1458]: Removed session 8. 
Oct 8 19:53:46.880558 kubelet[2636]: I1008 19:53:46.880478 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5bebc9-94a8-4ea9-b16c-012e141f4955" path="/var/lib/kubelet/pods/df5bebc9-94a8-4ea9-b16c-012e141f4955/volumes" Oct 8 19:53:46.980942 kubelet[2636]: E1008 19:53:46.980801 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:46.981805 containerd[1474]: time="2024-10-08T19:53:46.981588390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:53:47.874562 kubelet[2636]: E1008 19:53:47.874492 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:49.875403 kubelet[2636]: E1008 19:53:49.875331 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:51.036800 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740). Oct 8 19:53:51.118872 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:53:51.120403 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:51.125276 systemd-logind[1458]: New session 9 of user core. Oct 8 19:53:51.131654 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 8 19:53:51.273447 sshd[3518]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:51.277251 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:59740.service: Deactivated successfully. Oct 8 19:53:51.280517 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:53:51.283203 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:53:51.284685 systemd-logind[1458]: Removed session 9. Oct 8 19:53:51.875299 kubelet[2636]: E1008 19:53:51.875234 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:53.535120 containerd[1474]: time="2024-10-08T19:53:53.535002155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:53.538643 containerd[1474]: time="2024-10-08T19:53:53.538545112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 19:53:53.542016 containerd[1474]: time="2024-10-08T19:53:53.541951416Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:53.548559 containerd[1474]: time="2024-10-08T19:53:53.548466812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:53.549305 containerd[1474]: time="2024-10-08T19:53:53.549262181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.5676365s" Oct 8 19:53:53.549305 containerd[1474]: time="2024-10-08T19:53:53.549303000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 19:53:53.552031 containerd[1474]: time="2024-10-08T19:53:53.551988497Z" level=info msg="CreateContainer within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:53:53.575467 containerd[1474]: time="2024-10-08T19:53:53.575401203Z" level=info msg="CreateContainer within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962\"" Oct 8 19:53:53.576109 containerd[1474]: time="2024-10-08T19:53:53.576065993Z" level=info msg="StartContainer for \"9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962\"" Oct 8 19:53:53.615790 systemd[1]: Started cri-containerd-9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962.scope - libcontainer container 9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962. 
Oct 8 19:53:53.650065 containerd[1474]: time="2024-10-08T19:53:53.650009880Z" level=info msg="StartContainer for \"9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962\" returns successfully" Oct 8 19:53:53.874815 kubelet[2636]: E1008 19:53:53.874747 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:53.997569 kubelet[2636]: E1008 19:53:53.996711 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:55.129157 systemd[1]: cri-containerd-9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962.scope: Deactivated successfully. Oct 8 19:53:55.136613 kubelet[2636]: I1008 19:53:55.136567 2636 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:53:55.151578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962-rootfs.mount: Deactivated successfully. 
Oct 8 19:53:55.419469 kubelet[2636]: I1008 19:53:55.419255 2636 topology_manager.go:215] "Topology Admit Handler" podUID="5680a2c6-5726-4676-ad87-66368405db02" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5b2v2" Oct 8 19:53:55.421590 kubelet[2636]: I1008 19:53:55.421434 2636 topology_manager.go:215] "Topology Admit Handler" podUID="32f3dcde-f5c9-4e4f-9205-42916a8cefb8" podNamespace="calico-system" podName="calico-kube-controllers-7f455db588-9gpdk" Oct 8 19:53:55.422576 kubelet[2636]: I1008 19:53:55.421644 2636 topology_manager.go:215] "Topology Admit Handler" podUID="da500ec5-3c08-4ed4-8012-9ada49d45be0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5z9jp" Oct 8 19:53:55.427833 systemd[1]: Created slice kubepods-burstable-pod5680a2c6_5726_4676_ad87_66368405db02.slice - libcontainer container kubepods-burstable-pod5680a2c6_5726_4676_ad87_66368405db02.slice. Oct 8 19:53:55.432592 systemd[1]: Created slice kubepods-burstable-podda500ec5_3c08_4ed4_8012_9ada49d45be0.slice - libcontainer container kubepods-burstable-podda500ec5_3c08_4ed4_8012_9ada49d45be0.slice. Oct 8 19:53:55.438011 systemd[1]: Created slice kubepods-besteffort-pod32f3dcde_f5c9_4e4f_9205_42916a8cefb8.slice - libcontainer container kubepods-besteffort-pod32f3dcde_f5c9_4e4f_9205_42916a8cefb8.slice. 
Oct 8 19:53:55.474710 kubelet[2636]: I1008 19:53:55.474661 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5rc\" (UniqueName: \"kubernetes.io/projected/5680a2c6-5726-4676-ad87-66368405db02-kube-api-access-rn5rc\") pod \"coredns-7db6d8ff4d-5b2v2\" (UID: \"5680a2c6-5726-4676-ad87-66368405db02\") " pod="kube-system/coredns-7db6d8ff4d-5b2v2" Oct 8 19:53:55.474710 kubelet[2636]: I1008 19:53:55.474708 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx7m8\" (UniqueName: \"kubernetes.io/projected/32f3dcde-f5c9-4e4f-9205-42916a8cefb8-kube-api-access-qx7m8\") pod \"calico-kube-controllers-7f455db588-9gpdk\" (UID: \"32f3dcde-f5c9-4e4f-9205-42916a8cefb8\") " pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" Oct 8 19:53:55.474710 kubelet[2636]: I1008 19:53:55.474726 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5680a2c6-5726-4676-ad87-66368405db02-config-volume\") pod \"coredns-7db6d8ff4d-5b2v2\" (UID: \"5680a2c6-5726-4676-ad87-66368405db02\") " pod="kube-system/coredns-7db6d8ff4d-5b2v2" Oct 8 19:53:55.474967 kubelet[2636]: I1008 19:53:55.474747 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f3dcde-f5c9-4e4f-9205-42916a8cefb8-tigera-ca-bundle\") pod \"calico-kube-controllers-7f455db588-9gpdk\" (UID: \"32f3dcde-f5c9-4e4f-9205-42916a8cefb8\") " pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" Oct 8 19:53:55.474967 kubelet[2636]: I1008 19:53:55.474768 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da500ec5-3c08-4ed4-8012-9ada49d45be0-config-volume\") pod \"coredns-7db6d8ff4d-5z9jp\" (UID: 
\"da500ec5-3c08-4ed4-8012-9ada49d45be0\") " pod="kube-system/coredns-7db6d8ff4d-5z9jp" Oct 8 19:53:55.474967 kubelet[2636]: I1008 19:53:55.474790 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wr7\" (UniqueName: \"kubernetes.io/projected/da500ec5-3c08-4ed4-8012-9ada49d45be0-kube-api-access-m9wr7\") pod \"coredns-7db6d8ff4d-5z9jp\" (UID: \"da500ec5-3c08-4ed4-8012-9ada49d45be0\") " pod="kube-system/coredns-7db6d8ff4d-5z9jp" Oct 8 19:53:55.731165 kubelet[2636]: E1008 19:53:55.731028 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:55.731879 containerd[1474]: time="2024-10-08T19:53:55.731828269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5b2v2,Uid:5680a2c6-5726-4676-ad87-66368405db02,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:55.736142 kubelet[2636]: E1008 19:53:55.736110 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:55.736620 containerd[1474]: time="2024-10-08T19:53:55.736580511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5z9jp,Uid:da500ec5-3c08-4ed4-8012-9ada49d45be0,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:55.741035 containerd[1474]: time="2024-10-08T19:53:55.740999642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f455db588-9gpdk,Uid:32f3dcde-f5c9-4e4f-9205-42916a8cefb8,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:55.881270 systemd[1]: Created slice kubepods-besteffort-poda5b65116_575e_4269_8542_d6d284a4cec8.slice - libcontainer container kubepods-besteffort-poda5b65116_575e_4269_8542_d6d284a4cec8.slice. 
Oct 8 19:53:55.883855 containerd[1474]: time="2024-10-08T19:53:55.883801008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9hb6,Uid:a5b65116-575e-4269-8542-d6d284a4cec8,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:55.986250 containerd[1474]: time="2024-10-08T19:53:55.986089601Z" level=info msg="shim disconnected" id=9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962 namespace=k8s.io Oct 8 19:53:55.986250 containerd[1474]: time="2024-10-08T19:53:55.986153284Z" level=warning msg="cleaning up after shim disconnected" id=9e3c8eff17f50ff06043ca65f88c23940f6f2f18d7bdb55cf22ee3102fcdf962 namespace=k8s.io Oct 8 19:53:55.986250 containerd[1474]: time="2024-10-08T19:53:55.986164385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:56.120952 containerd[1474]: time="2024-10-08T19:53:56.120848308Z" level=error msg="Failed to destroy network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.122585 containerd[1474]: time="2024-10-08T19:53:56.121796289Z" level=error msg="encountered an error cleaning up failed sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.122585 containerd[1474]: time="2024-10-08T19:53:56.121864230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9hb6,Uid:a5b65116-575e-4269-8542-d6d284a4cec8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.122667 kubelet[2636]: E1008 19:53:56.122191 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.122667 kubelet[2636]: E1008 19:53:56.122283 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:56.122667 kubelet[2636]: E1008 19:53:56.122317 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9hb6" Oct 8 19:53:56.122778 kubelet[2636]: E1008 19:53:56.122382 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9hb6_calico-system(a5b65116-575e-4269-8542-d6d284a4cec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9hb6_calico-system(a5b65116-575e-4269-8542-d6d284a4cec8)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:56.122848 containerd[1474]: time="2024-10-08T19:53:56.122768005Z" level=error msg="Failed to destroy network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.123410 containerd[1474]: time="2024-10-08T19:53:56.123329283Z" level=error msg="encountered an error cleaning up failed sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.123547 containerd[1474]: time="2024-10-08T19:53:56.123431790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f455db588-9gpdk,Uid:32f3dcde-f5c9-4e4f-9205-42916a8cefb8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.123757 kubelet[2636]: E1008 19:53:56.123689 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.123757 kubelet[2636]: E1008 19:53:56.123734 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" Oct 8 19:53:56.123829 containerd[1474]: time="2024-10-08T19:53:56.123667893Z" level=error msg="Failed to destroy network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.123859 kubelet[2636]: E1008 19:53:56.123756 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" Oct 8 19:53:56.123859 kubelet[2636]: E1008 19:53:56.123790 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f455db588-9gpdk_calico-system(32f3dcde-f5c9-4e4f-9205-42916a8cefb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7f455db588-9gpdk_calico-system(32f3dcde-f5c9-4e4f-9205-42916a8cefb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" podUID="32f3dcde-f5c9-4e4f-9205-42916a8cefb8" Oct 8 19:53:56.124343 containerd[1474]: time="2024-10-08T19:53:56.124303384Z" level=error msg="encountered an error cleaning up failed sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.124390 containerd[1474]: time="2024-10-08T19:53:56.124348150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5b2v2,Uid:5680a2c6-5726-4676-ad87-66368405db02,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.124574 kubelet[2636]: E1008 19:53:56.124508 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.124624 
kubelet[2636]: E1008 19:53:56.124571 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5b2v2" Oct 8 19:53:56.124624 kubelet[2636]: E1008 19:53:56.124588 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5b2v2" Oct 8 19:53:56.124689 kubelet[2636]: E1008 19:53:56.124618 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5b2v2_kube-system(5680a2c6-5726-4676-ad87-66368405db02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5b2v2_kube-system(5680a2c6-5726-4676-ad87-66368405db02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5b2v2" podUID="5680a2c6-5726-4676-ad87-66368405db02" Oct 8 19:53:56.125336 containerd[1474]: time="2024-10-08T19:53:56.125304406Z" level=error msg="Failed to destroy network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.125677 containerd[1474]: time="2024-10-08T19:53:56.125650350Z" level=error msg="encountered an error cleaning up failed sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.125738 containerd[1474]: time="2024-10-08T19:53:56.125693372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5z9jp,Uid:da500ec5-3c08-4ed4-8012-9ada49d45be0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.125945 kubelet[2636]: E1008 19:53:56.125906 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:56.125983 kubelet[2636]: E1008 19:53:56.125956 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5z9jp" Oct 8 19:53:56.126020 kubelet[2636]: E1008 19:53:56.125975 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5z9jp" Oct 8 19:53:56.126052 kubelet[2636]: E1008 19:53:56.126018 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5z9jp_kube-system(da500ec5-3c08-4ed4-8012-9ada49d45be0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5z9jp_kube-system(da500ec5-3c08-4ed4-8012-9ada49d45be0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5z9jp" podUID="da500ec5-3c08-4ed4-8012-9ada49d45be0" Oct 8 19:53:56.153030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375-shm.mount: Deactivated successfully. Oct 8 19:53:56.292667 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:59748.service - OpenSSH per-connection server daemon (10.0.0.1:59748). 
Oct 8 19:53:56.331329 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 59748 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:53:56.333240 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:56.337792 systemd-logind[1458]: New session 10 of user core. Oct 8 19:53:56.349661 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:53:56.463772 sshd[3748]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:56.469381 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:59748.service: Deactivated successfully. Oct 8 19:53:56.471738 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:53:56.472391 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:53:56.473348 systemd-logind[1458]: Removed session 10. Oct 8 19:53:56.649927 kubelet[2636]: I1008 19:53:56.649762 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:53:56.650621 kubelet[2636]: E1008 19:53:56.650604 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:57.003283 kubelet[2636]: I1008 19:53:57.003137 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:53:57.003956 containerd[1474]: time="2024-10-08T19:53:57.003728472Z" level=info msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" Oct 8 19:53:57.004794 containerd[1474]: time="2024-10-08T19:53:57.003974585Z" level=info msg="Ensure that sandbox b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587 in task-service has been cleanup successfully" Oct 8 19:53:57.004823 kubelet[2636]: I1008 19:53:57.004191 2636 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:53:57.005073 containerd[1474]: time="2024-10-08T19:53:57.005021574Z" level=info msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" Oct 8 19:53:57.005314 containerd[1474]: time="2024-10-08T19:53:57.005273227Z" level=info msg="Ensure that sandbox 17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88 in task-service has been cleanup successfully" Oct 8 19:53:57.008041 kubelet[2636]: I1008 19:53:57.008010 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:53:57.009477 containerd[1474]: time="2024-10-08T19:53:57.008875600Z" level=info msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" Oct 8 19:53:57.009477 containerd[1474]: time="2024-10-08T19:53:57.009082807Z" level=info msg="Ensure that sandbox b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720 in task-service has been cleanup successfully" Oct 8 19:53:57.011019 kubelet[2636]: I1008 19:53:57.010959 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:53:57.011804 containerd[1474]: time="2024-10-08T19:53:57.011776777Z" level=info msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" Oct 8 19:53:57.012196 containerd[1474]: time="2024-10-08T19:53:57.012169391Z" level=info msg="Ensure that sandbox b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375 in task-service has been cleanup successfully" Oct 8 19:53:57.013798 kubelet[2636]: E1008 19:53:57.013759 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:57.014060 
kubelet[2636]: E1008 19:53:57.014039 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:57.015577 containerd[1474]: time="2024-10-08T19:53:57.015092711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:53:57.039820 containerd[1474]: time="2024-10-08T19:53:57.039510941Z" level=error msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" failed" error="failed to destroy network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:57.040167 kubelet[2636]: E1008 19:53:57.040010 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:53:57.040167 kubelet[2636]: E1008 19:53:57.040070 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587"} Oct 8 19:53:57.040167 kubelet[2636]: E1008 19:53:57.040108 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da500ec5-3c08-4ed4-8012-9ada49d45be0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:57.040167 kubelet[2636]: E1008 19:53:57.040132 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da500ec5-3c08-4ed4-8012-9ada49d45be0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5z9jp" podUID="da500ec5-3c08-4ed4-8012-9ada49d45be0" Oct 8 19:53:57.042417 containerd[1474]: time="2024-10-08T19:53:57.042343096Z" level=error msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" failed" error="failed to destroy network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:57.042629 kubelet[2636]: E1008 19:53:57.042597 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:53:57.042844 kubelet[2636]: E1008 19:53:57.042720 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88"} Oct 8 19:53:57.042844 kubelet[2636]: E1008 19:53:57.042748 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5b65116-575e-4269-8542-d6d284a4cec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:57.042844 kubelet[2636]: E1008 19:53:57.042779 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5b65116-575e-4269-8542-d6d284a4cec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9hb6" podUID="a5b65116-575e-4269-8542-d6d284a4cec8" Oct 8 19:53:57.051138 containerd[1474]: time="2024-10-08T19:53:57.051080474Z" level=error msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" failed" error="failed to destroy network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:57.051740 kubelet[2636]: E1008 19:53:57.051691 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:53:57.051740 kubelet[2636]: E1008 19:53:57.051735 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375"} Oct 8 19:53:57.051969 kubelet[2636]: E1008 19:53:57.051766 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5680a2c6-5726-4676-ad87-66368405db02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:57.051969 kubelet[2636]: E1008 19:53:57.051787 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5680a2c6-5726-4676-ad87-66368405db02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5b2v2" podUID="5680a2c6-5726-4676-ad87-66368405db02" Oct 8 19:53:57.052559 containerd[1474]: time="2024-10-08T19:53:57.052493807Z" level=error msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" failed" error="failed to destroy network for 
sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:57.052723 kubelet[2636]: E1008 19:53:57.052680 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:53:57.052723 kubelet[2636]: E1008 19:53:57.052711 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720"} Oct 8 19:53:57.052829 kubelet[2636]: E1008 19:53:57.052748 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32f3dcde-f5c9-4e4f-9205-42916a8cefb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:57.052829 kubelet[2636]: E1008 19:53:57.052767 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32f3dcde-f5c9-4e4f-9205-42916a8cefb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" podUID="32f3dcde-f5c9-4e4f-9205-42916a8cefb8" Oct 8 19:54:01.395566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94657042.mount: Deactivated successfully. Oct 8 19:54:01.476416 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:36032.service - OpenSSH per-connection server daemon (10.0.0.1:36032). Oct 8 19:54:01.514760 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 36032 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:01.516509 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:01.521102 systemd-logind[1458]: New session 11 of user core. Oct 8 19:54:01.527685 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:54:01.751609 sshd[3866]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:01.755890 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:36032.service: Deactivated successfully. Oct 8 19:54:01.758077 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:54:01.758834 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:54:01.759848 systemd-logind[1458]: Removed session 11. 
Oct 8 19:54:03.074213 containerd[1474]: time="2024-10-08T19:54:03.074123836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:03.077067 containerd[1474]: time="2024-10-08T19:54:03.076966272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 19:54:03.078693 containerd[1474]: time="2024-10-08T19:54:03.078652217Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:03.094860 containerd[1474]: time="2024-10-08T19:54:03.094777210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:03.095744 containerd[1474]: time="2024-10-08T19:54:03.095702029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.080564623s" Oct 8 19:54:03.095807 containerd[1474]: time="2024-10-08T19:54:03.095745542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 19:54:03.106257 containerd[1474]: time="2024-10-08T19:54:03.106206111Z" level=info msg="CreateContainer within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:54:03.138173 containerd[1474]: time="2024-10-08T19:54:03.138115977Z" level=info msg="CreateContainer 
within sandbox \"8e5b3c9ee4daa75a2f4c65f5e3b4616cb0211cf3b4ef569af4aa257908b580c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17\"" Oct 8 19:54:03.138866 containerd[1474]: time="2024-10-08T19:54:03.138817759Z" level=info msg="StartContainer for \"2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17\"" Oct 8 19:54:03.206691 systemd[1]: Started cri-containerd-2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17.scope - libcontainer container 2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17. Oct 8 19:54:03.244080 containerd[1474]: time="2024-10-08T19:54:03.244007272Z" level=info msg="StartContainer for \"2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17\" returns successfully" Oct 8 19:54:03.317790 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:54:03.319158 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:54:04.035741 kubelet[2636]: E1008 19:54:04.034963 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:04.853574 kernel: bpftool[4095]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:54:05.034471 kubelet[2636]: E1008 19:54:05.034429 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:05.116356 systemd-networkd[1398]: vxlan.calico: Link UP Oct 8 19:54:05.116367 systemd-networkd[1398]: vxlan.calico: Gained carrier Oct 8 19:54:06.766101 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034). 
Oct 8 19:54:06.806476 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:06.808695 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:06.813522 systemd-logind[1458]: New session 12 of user core. Oct 8 19:54:06.824737 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:54:06.899776 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Oct 8 19:54:07.082935 sshd[4196]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:07.092597 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:36034.service: Deactivated successfully. Oct 8 19:54:07.094476 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:54:07.095928 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:54:07.102911 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:36040.service - OpenSSH per-connection server daemon (10.0.0.1:36040). Oct 8 19:54:07.104060 systemd-logind[1458]: Removed session 12. Oct 8 19:54:07.135513 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 36040 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:07.137971 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:07.143878 systemd-logind[1458]: New session 13 of user core. Oct 8 19:54:07.151717 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:54:07.352516 sshd[4212]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:07.362450 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:36040.service: Deactivated successfully. Oct 8 19:54:07.365141 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:54:07.368781 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:54:07.375979 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:36048.service - OpenSSH per-connection server daemon (10.0.0.1:36048). 
Oct 8 19:54:07.377225 systemd-logind[1458]: Removed session 13. Oct 8 19:54:07.405980 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 36048 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:07.408342 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:07.413648 systemd-logind[1458]: New session 14 of user core. Oct 8 19:54:07.421905 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:54:07.561222 sshd[4225]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:07.567650 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:36048.service: Deactivated successfully. Oct 8 19:54:07.570288 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:54:07.571076 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:54:07.572418 systemd-logind[1458]: Removed session 14. Oct 8 19:54:07.875443 containerd[1474]: time="2024-10-08T19:54:07.875234290Z" level=info msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" Oct 8 19:54:07.936812 kubelet[2636]: I1008 19:54:07.936720 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mg9zh" podStartSLOduration=7.821191999 podStartE2EDuration="23.936421277s" podCreationTimestamp="2024-10-08 19:53:44 +0000 UTC" firstStartedPulling="2024-10-08 19:53:46.981360308 +0000 UTC m=+34.194985238" lastFinishedPulling="2024-10-08 19:54:03.096589586 +0000 UTC m=+50.310214516" observedRunningTime="2024-10-08 19:54:04.09338456 +0000 UTC m=+51.307009490" watchObservedRunningTime="2024-10-08 19:54:07.936421277 +0000 UTC m=+55.150046207" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.938 [INFO][4254] k8s.go 608: Cleaning up netns ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.938 [INFO][4254] dataplane_linux.go 530: Deleting 
workload's device in netns. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" iface="eth0" netns="/var/run/netns/cni-ecb89845-d622-7a1b-6dfa-edff3ae88835" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.940 [INFO][4254] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" iface="eth0" netns="/var/run/netns/cni-ecb89845-d622-7a1b-6dfa-edff3ae88835" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.941 [INFO][4254] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" iface="eth0" netns="/var/run/netns/cni-ecb89845-d622-7a1b-6dfa-edff3ae88835" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.941 [INFO][4254] k8s.go 615: Releasing IP address(es) ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:07.941 [INFO][4254] utils.go 188: Calico CNI releasing IP address ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.006 [INFO][4262] ipam_plugin.go 417: Releasing address using handleID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.007 [INFO][4262] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.007 [INFO][4262] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.014 [WARNING][4262] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.014 [INFO][4262] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.016 [INFO][4262] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:08.022084 containerd[1474]: 2024-10-08 19:54:08.018 [INFO][4254] k8s.go 621: Teardown processing complete. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:08.022835 containerd[1474]: time="2024-10-08T19:54:08.022282253Z" level=info msg="TearDown network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" successfully" Oct 8 19:54:08.022835 containerd[1474]: time="2024-10-08T19:54:08.022318382Z" level=info msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" returns successfully" Oct 8 19:54:08.022949 kubelet[2636]: E1008 19:54:08.022914 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:08.024007 containerd[1474]: time="2024-10-08T19:54:08.023960085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5b2v2,Uid:5680a2c6-5726-4676-ad87-66368405db02,Namespace:kube-system,Attempt:1,}" Oct 8 19:54:08.026114 systemd[1]: run-netns-cni\x2decb89845\x2dd622\x2d7a1b\x2d6dfa\x2dedff3ae88835.mount: Deactivated successfully. 
Oct 8 19:54:08.406373 systemd-networkd[1398]: cali7a3982f92ac: Link UP Oct 8 19:54:08.406737 systemd-networkd[1398]: cali7a3982f92ac: Gained carrier Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.327 [INFO][4270] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0 coredns-7db6d8ff4d- kube-system 5680a2c6-5726-4676-ad87-66368405db02 879 0 2024-10-08 19:53:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5b2v2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a3982f92ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.327 [INFO][4270] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.359 [INFO][4283] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" HandleID="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.369 [INFO][4283] ipam_plugin.go 270: Auto assigning IP ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" 
HandleID="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d970), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5b2v2", "timestamp":"2024-10-08 19:54:08.359267968 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.369 [INFO][4283] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.369 [INFO][4283] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.369 [INFO][4283] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.371 [INFO][4283] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.377 [INFO][4283] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.382 [INFO][4283] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.384 [INFO][4283] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.386 [INFO][4283] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.386 [INFO][4283] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.388 [INFO][4283] ipam.go 1685: Creating new handle: k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.395 [INFO][4283] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.400 [INFO][4283] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.400 [INFO][4283] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" host="localhost" Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.400 [INFO][4283] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:54:08.423182 containerd[1474]: 2024-10-08 19:54:08.400 [INFO][4283] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" HandleID="k8s-pod-network.1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.403 [INFO][4270] k8s.go 386: Populated endpoint ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5680a2c6-5726-4676-ad87-66368405db02", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5b2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3982f92ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.404 [INFO][4270] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.404 [INFO][4270] dataplane_linux.go 68: Setting the host side veth name to cali7a3982f92ac ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.406 [INFO][4270] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.406 [INFO][4270] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5680a2c6-5726-4676-ad87-66368405db02", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c", Pod:"coredns-7db6d8ff4d-5b2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3982f92ac", MAC:"32:56:0a:28:31:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:08.424079 containerd[1474]: 2024-10-08 19:54:08.417 [INFO][4270] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5b2v2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:08.467351 containerd[1474]: 
time="2024-10-08T19:54:08.467222823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:08.467351 containerd[1474]: time="2024-10-08T19:54:08.467288228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:08.467351 containerd[1474]: time="2024-10-08T19:54:08.467302846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:08.467689 containerd[1474]: time="2024-10-08T19:54:08.467405472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:08.496785 systemd[1]: Started cri-containerd-1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c.scope - libcontainer container 1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c. 
Oct 8 19:54:08.515902 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:54:08.547926 containerd[1474]: time="2024-10-08T19:54:08.547856610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5b2v2,Uid:5680a2c6-5726-4676-ad87-66368405db02,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c\"" Oct 8 19:54:08.549099 kubelet[2636]: E1008 19:54:08.549068 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:08.551586 containerd[1474]: time="2024-10-08T19:54:08.551512658Z" level=info msg="CreateContainer within sandbox \"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:54:08.581960 containerd[1474]: time="2024-10-08T19:54:08.581864609Z" level=info msg="CreateContainer within sandbox \"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"210949c8b77a0ddb42631e08c622a17f10af6e038f5c8b97dcc61c723fe14a45\"" Oct 8 19:54:08.582863 containerd[1474]: time="2024-10-08T19:54:08.582808090Z" level=info msg="StartContainer for \"210949c8b77a0ddb42631e08c622a17f10af6e038f5c8b97dcc61c723fe14a45\"" Oct 8 19:54:08.616861 systemd[1]: Started cri-containerd-210949c8b77a0ddb42631e08c622a17f10af6e038f5c8b97dcc61c723fe14a45.scope - libcontainer container 210949c8b77a0ddb42631e08c622a17f10af6e038f5c8b97dcc61c723fe14a45. 
Oct 8 19:54:08.658677 containerd[1474]: time="2024-10-08T19:54:08.658473816Z" level=info msg="StartContainer for \"210949c8b77a0ddb42631e08c622a17f10af6e038f5c8b97dcc61c723fe14a45\" returns successfully" Oct 8 19:54:08.875778 containerd[1474]: time="2024-10-08T19:54:08.875605629Z" level=info msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.941 [INFO][4400] k8s.go 608: Cleaning up netns ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.942 [INFO][4400] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" iface="eth0" netns="/var/run/netns/cni-36d5fec0-fc56-9601-2835-de033e72441f" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.942 [INFO][4400] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" iface="eth0" netns="/var/run/netns/cni-36d5fec0-fc56-9601-2835-de033e72441f" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.943 [INFO][4400] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" iface="eth0" netns="/var/run/netns/cni-36d5fec0-fc56-9601-2835-de033e72441f" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.943 [INFO][4400] k8s.go 615: Releasing IP address(es) ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.943 [INFO][4400] utils.go 188: Calico CNI releasing IP address ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.989 [INFO][4408] ipam_plugin.go 417: Releasing address using handleID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.989 [INFO][4408] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.989 [INFO][4408] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.995 [WARNING][4408] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.995 [INFO][4408] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:08.996 [INFO][4408] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:09.003333 containerd[1474]: 2024-10-08 19:54:09.000 [INFO][4400] k8s.go 621: Teardown processing complete. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:09.003801 containerd[1474]: time="2024-10-08T19:54:09.003468865Z" level=info msg="TearDown network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" successfully" Oct 8 19:54:09.003801 containerd[1474]: time="2024-10-08T19:54:09.003498281Z" level=info msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" returns successfully" Oct 8 19:54:09.004515 containerd[1474]: time="2024-10-08T19:54:09.004336229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9hb6,Uid:a5b65116-575e-4269-8542-d6d284a4cec8,Namespace:calico-system,Attempt:1,}" Oct 8 19:54:09.030256 systemd[1]: run-netns-cni\x2d36d5fec0\x2dfc56\x2d9601\x2d2835\x2dde033e72441f.mount: Deactivated successfully. 
Oct 8 19:54:09.051698 kubelet[2636]: E1008 19:54:09.051657 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:09.065091 kubelet[2636]: I1008 19:54:09.064854 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5b2v2" podStartSLOduration=42.064832178 podStartE2EDuration="42.064832178s" podCreationTimestamp="2024-10-08 19:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:54:09.064591819 +0000 UTC m=+56.278216749" watchObservedRunningTime="2024-10-08 19:54:09.064832178 +0000 UTC m=+56.278457108" Oct 8 19:54:09.138219 systemd-networkd[1398]: cali6728a4b4dbf: Link UP Oct 8 19:54:09.138717 systemd-networkd[1398]: cali6728a4b4dbf: Gained carrier Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.053 [INFO][4416] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h9hb6-eth0 csi-node-driver- calico-system a5b65116-575e-4269-8542-d6d284a4cec8 895 0 2024-10-08 19:53:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-h9hb6 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali6728a4b4dbf [] []}} ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.053 [INFO][4416] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.092 [INFO][4429] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" HandleID="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.103 [INFO][4429] ipam_plugin.go 270: Auto assigning IP ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" HandleID="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003641b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h9hb6", "timestamp":"2024-10-08 19:54:09.092449869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.103 [INFO][4429] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.104 [INFO][4429] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.104 [INFO][4429] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.106 [INFO][4429] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.109 [INFO][4429] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.115 [INFO][4429] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.117 [INFO][4429] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.119 [INFO][4429] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.119 [INFO][4429] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.121 [INFO][4429] ipam.go 1685: Creating new handle: k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342 Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.125 [INFO][4429] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.132 [INFO][4429] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" host="localhost" Oct 8 
19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.132 [INFO][4429] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" host="localhost" Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.132 [INFO][4429] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:09.156902 containerd[1474]: 2024-10-08 19:54:09.132 [INFO][4429] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" HandleID="k8s-pod-network.752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.136 [INFO][4416] k8s.go 386: Populated endpoint ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9hb6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b65116-575e-4269-8542-d6d284a4cec8", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h9hb6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6728a4b4dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.136 [INFO][4416] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.136 [INFO][4416] dataplane_linux.go 68: Setting the host side veth name to cali6728a4b4dbf ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.139 [INFO][4416] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.139 [INFO][4416] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9hb6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b65116-575e-4269-8542-d6d284a4cec8", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342", Pod:"csi-node-driver-h9hb6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6728a4b4dbf", MAC:"9a:18:ab:46:48:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:09.157589 containerd[1474]: 2024-10-08 19:54:09.151 [INFO][4416] k8s.go 500: Wrote updated endpoint to datastore ContainerID="752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342" Namespace="calico-system" Pod="csi-node-driver-h9hb6" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:09.182448 containerd[1474]: time="2024-10-08T19:54:09.182327235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:09.182448 containerd[1474]: time="2024-10-08T19:54:09.182402037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:09.182448 containerd[1474]: time="2024-10-08T19:54:09.182413930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:09.182859 containerd[1474]: time="2024-10-08T19:54:09.182523088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:09.212751 systemd[1]: Started cri-containerd-752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342.scope - libcontainer container 752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342. Oct 8 19:54:09.243290 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:54:09.263139 containerd[1474]: time="2024-10-08T19:54:09.262970776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9hb6,Uid:a5b65116-575e-4269-8542-d6d284a4cec8,Namespace:calico-system,Attempt:1,} returns sandbox id \"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342\"" Oct 8 19:54:09.266968 containerd[1474]: time="2024-10-08T19:54:09.266774061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:54:09.875597 containerd[1474]: time="2024-10-08T19:54:09.875497498Z" level=info msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" Oct 8 19:54:10.035853 systemd-networkd[1398]: cali7a3982f92ac: Gained IPv6LL Oct 8 19:54:10.057308 kubelet[2636]: E1008 19:54:10.057270 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 8 19:54:10.419724 systemd-networkd[1398]: cali6728a4b4dbf: Gained IPv6LL Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.039 [INFO][4511] k8s.go 608: Cleaning up netns ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.039 [INFO][4511] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" iface="eth0" netns="/var/run/netns/cni-14166890-0f1c-2c39-2467-20fde52e75d3" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.040 [INFO][4511] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" iface="eth0" netns="/var/run/netns/cni-14166890-0f1c-2c39-2467-20fde52e75d3" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.040 [INFO][4511] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" iface="eth0" netns="/var/run/netns/cni-14166890-0f1c-2c39-2467-20fde52e75d3" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.040 [INFO][4511] k8s.go 615: Releasing IP address(es) ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.040 [INFO][4511] utils.go 188: Calico CNI releasing IP address ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.065 [INFO][4518] ipam_plugin.go 417: Releasing address using handleID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.065 [INFO][4518] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.065 [INFO][4518] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.217 [WARNING][4518] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.218 [INFO][4518] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.425 [INFO][4518] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:10.431945 containerd[1474]: 2024-10-08 19:54:10.429 [INFO][4511] k8s.go 621: Teardown processing complete. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:10.432693 containerd[1474]: time="2024-10-08T19:54:10.432187554Z" level=info msg="TearDown network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" successfully" Oct 8 19:54:10.432693 containerd[1474]: time="2024-10-08T19:54:10.432226729Z" level=info msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" returns successfully" Oct 8 19:54:10.433771 containerd[1474]: time="2024-10-08T19:54:10.433729995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f455db588-9gpdk,Uid:32f3dcde-f5c9-4e4f-9205-42916a8cefb8,Namespace:calico-system,Attempt:1,}" Oct 8 19:54:10.435047 systemd[1]: run-netns-cni\x2d14166890\x2d0f1c\x2d2c39\x2d2467\x2d20fde52e75d3.mount: Deactivated successfully. 
Oct 8 19:54:10.876779 containerd[1474]: time="2024-10-08T19:54:10.876158654Z" level=info msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.973 [INFO][4547] k8s.go 608: Cleaning up netns ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.974 [INFO][4547] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" iface="eth0" netns="/var/run/netns/cni-5efd86fa-a148-8f93-2f4c-681c50ccf39c" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.974 [INFO][4547] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" iface="eth0" netns="/var/run/netns/cni-5efd86fa-a148-8f93-2f4c-681c50ccf39c" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.974 [INFO][4547] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" iface="eth0" netns="/var/run/netns/cni-5efd86fa-a148-8f93-2f4c-681c50ccf39c" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.975 [INFO][4547] k8s.go 615: Releasing IP address(es) ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:10.975 [INFO][4547] utils.go 188: Calico CNI releasing IP address ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.011 [INFO][4566] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.011 [INFO][4566] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.011 [INFO][4566] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.016 [WARNING][4566] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.017 [INFO][4566] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.018 [INFO][4566] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:11.023619 containerd[1474]: 2024-10-08 19:54:11.021 [INFO][4547] k8s.go 621: Teardown processing complete. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:11.024981 containerd[1474]: time="2024-10-08T19:54:11.023926305Z" level=info msg="TearDown network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" successfully" Oct 8 19:54:11.024981 containerd[1474]: time="2024-10-08T19:54:11.023964488Z" level=info msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" returns successfully" Oct 8 19:54:11.025139 kubelet[2636]: E1008 19:54:11.024621 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:11.025983 containerd[1474]: time="2024-10-08T19:54:11.025673405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5z9jp,Uid:da500ec5-3c08-4ed4-8012-9ada49d45be0,Namespace:kube-system,Attempt:1,}" Oct 8 19:54:11.028280 systemd[1]: run-netns-cni\x2d5efd86fa\x2da148\x2d8f93\x2d2f4c\x2d681c50ccf39c.mount: Deactivated successfully. 
Oct 8 19:54:11.059776 kubelet[2636]: E1008 19:54:11.059738 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:11.264852 systemd-networkd[1398]: calif8c3817d11d: Link UP Oct 8 19:54:11.265605 systemd-networkd[1398]: calif8c3817d11d: Gained carrier Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.044 [INFO][4555] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0 calico-kube-controllers-7f455db588- calico-system 32f3dcde-f5c9-4e4f-9205-42916a8cefb8 912 0 2024-10-08 19:53:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f455db588 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f455db588-9gpdk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif8c3817d11d [] []}} ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.044 [INFO][4555] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.087 [INFO][4576] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" 
HandleID="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.169 [INFO][4576] ipam_plugin.go 270: Auto assigning IP ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" HandleID="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f455db588-9gpdk", "timestamp":"2024-10-08 19:54:11.087113361 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.169 [INFO][4576] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.169 [INFO][4576] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.169 [INFO][4576] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.171 [INFO][4576] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.176 [INFO][4576] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.181 [INFO][4576] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.184 [INFO][4576] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.186 [INFO][4576] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.186 [INFO][4576] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.188 [INFO][4576] ipam.go 1685: Creating new handle: k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4 Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.209 [INFO][4576] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.258 [INFO][4576] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" host="localhost" Oct 8 
19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.258 [INFO][4576] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" host="localhost" Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.258 [INFO][4576] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:11.349914 containerd[1474]: 2024-10-08 19:54:11.258 [INFO][4576] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" HandleID="k8s-pod-network.5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.262 [INFO][4555] k8s.go 386: Populated endpoint ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0", GenerateName:"calico-kube-controllers-7f455db588-", Namespace:"calico-system", SelfLink:"", UID:"32f3dcde-f5c9-4e4f-9205-42916a8cefb8", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f455db588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f455db588-9gpdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8c3817d11d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.262 [INFO][4555] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.262 [INFO][4555] dataplane_linux.go 68: Setting the host side veth name to calif8c3817d11d ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.265 [INFO][4555] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.265 [INFO][4555] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" 
Pod="calico-kube-controllers-7f455db588-9gpdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0", GenerateName:"calico-kube-controllers-7f455db588-", Namespace:"calico-system", SelfLink:"", UID:"32f3dcde-f5c9-4e4f-9205-42916a8cefb8", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f455db588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4", Pod:"calico-kube-controllers-7f455db588-9gpdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8c3817d11d", MAC:"e6:85:00:11:11:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:11.350522 containerd[1474]: 2024-10-08 19:54:11.346 [INFO][4555] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4" Namespace="calico-system" Pod="calico-kube-controllers-7f455db588-9gpdk" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:11.409095 containerd[1474]: time="2024-10-08T19:54:11.408959393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:11.409095 containerd[1474]: time="2024-10-08T19:54:11.409059364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:11.409095 containerd[1474]: time="2024-10-08T19:54:11.409084541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:11.409355 containerd[1474]: time="2024-10-08T19:54:11.409206804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:11.454888 systemd[1]: Started cri-containerd-5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4.scope - libcontainer container 5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4. 
Oct 8 19:54:11.505470 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:54:11.583992 containerd[1474]: time="2024-10-08T19:54:11.583760856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f455db588-9gpdk,Uid:32f3dcde-f5c9-4e4f-9205-42916a8cefb8,Namespace:calico-system,Attempt:1,} returns sandbox id \"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4\"" Oct 8 19:54:11.630801 systemd-networkd[1398]: cali685c4d2e263: Link UP Oct 8 19:54:11.631253 systemd-networkd[1398]: cali685c4d2e263: Gained carrier Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.492 [INFO][4623] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0 coredns-7db6d8ff4d- kube-system da500ec5-3c08-4ed4-8012-9ada49d45be0 916 0 2024-10-08 19:53:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5z9jp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali685c4d2e263 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.493 [INFO][4623] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.543 [INFO][4646] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" HandleID="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.569 [INFO][4646] ipam_plugin.go 270: Auto assigning IP ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" HandleID="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305d40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5z9jp", "timestamp":"2024-10-08 19:54:11.543167776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.570 [INFO][4646] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.570 [INFO][4646] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.571 [INFO][4646] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.578 [INFO][4646] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.589 [INFO][4646] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.598 [INFO][4646] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.600 [INFO][4646] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.605 [INFO][4646] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.605 [INFO][4646] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.607 [INFO][4646] ipam.go 1685: Creating new handle: k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411 Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.612 [INFO][4646] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.622 [INFO][4646] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" host="localhost" Oct 8 
19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.622 [INFO][4646] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" host="localhost" Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.622 [INFO][4646] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:11.668054 containerd[1474]: 2024-10-08 19:54:11.623 [INFO][4646] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" HandleID="k8s-pod-network.39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.626 [INFO][4623] k8s.go 386: Populated endpoint ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da500ec5-3c08-4ed4-8012-9ada49d45be0", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-5z9jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali685c4d2e263", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.626 [INFO][4623] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.626 [INFO][4623] dataplane_linux.go 68: Setting the host side veth name to cali685c4d2e263 ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.628 [INFO][4623] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.629 [INFO][4623] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da500ec5-3c08-4ed4-8012-9ada49d45be0", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411", Pod:"coredns-7db6d8ff4d-5z9jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali685c4d2e263", MAC:"52:c8:30:5e:9e:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:11.671593 containerd[1474]: 2024-10-08 19:54:11.659 [INFO][4623] k8s.go 500: Wrote updated endpoint to datastore ContainerID="39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5z9jp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:11.726864 containerd[1474]: time="2024-10-08T19:54:11.724409845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:11.726864 containerd[1474]: time="2024-10-08T19:54:11.724503304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:11.726864 containerd[1474]: time="2024-10-08T19:54:11.725221593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:11.729630 containerd[1474]: time="2024-10-08T19:54:11.729317319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:11.765797 systemd[1]: Started cri-containerd-39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411.scope - libcontainer container 39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411. 
Oct 8 19:54:11.785390 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:54:11.819558 containerd[1474]: time="2024-10-08T19:54:11.819461456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5z9jp,Uid:da500ec5-3c08-4ed4-8012-9ada49d45be0,Namespace:kube-system,Attempt:1,} returns sandbox id \"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411\"" Oct 8 19:54:11.820600 kubelet[2636]: E1008 19:54:11.820562 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:11.823650 containerd[1474]: time="2024-10-08T19:54:11.823608830Z" level=info msg="CreateContainer within sandbox \"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:54:11.849834 containerd[1474]: time="2024-10-08T19:54:11.849674221Z" level=info msg="CreateContainer within sandbox \"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd96e1cb35947943dbdabca433cf885ebf04da4875b03981c5b17c2324902e63\"" Oct 8 19:54:11.852956 containerd[1474]: time="2024-10-08T19:54:11.851870026Z" level=info msg="StartContainer for \"cd96e1cb35947943dbdabca433cf885ebf04da4875b03981c5b17c2324902e63\"" Oct 8 19:54:11.884745 systemd[1]: Started cri-containerd-cd96e1cb35947943dbdabca433cf885ebf04da4875b03981c5b17c2324902e63.scope - libcontainer container cd96e1cb35947943dbdabca433cf885ebf04da4875b03981c5b17c2324902e63. 
Oct 8 19:54:11.891151 containerd[1474]: time="2024-10-08T19:54:11.891102905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:11.893061 containerd[1474]: time="2024-10-08T19:54:11.893016222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 19:54:11.896550 containerd[1474]: time="2024-10-08T19:54:11.894620058Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:11.899285 containerd[1474]: time="2024-10-08T19:54:11.899255655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:11.902552 containerd[1474]: time="2024-10-08T19:54:11.900245491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.633421124s" Oct 8 19:54:11.902552 containerd[1474]: time="2024-10-08T19:54:11.900288623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 19:54:11.902552 containerd[1474]: time="2024-10-08T19:54:11.901999224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:54:11.904043 containerd[1474]: time="2024-10-08T19:54:11.903940414Z" level=info msg="CreateContainer within sandbox \"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:54:11.986840 containerd[1474]: time="2024-10-08T19:54:11.986499141Z" level=info msg="StartContainer for \"cd96e1cb35947943dbdabca433cf885ebf04da4875b03981c5b17c2324902e63\" returns successfully" Oct 8 19:54:11.992116 containerd[1474]: time="2024-10-08T19:54:11.992051414Z" level=info msg="CreateContainer within sandbox \"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55c150c2b5fa5adedd134b1cddd33bed7d0df68f1c43b29a7c79d2d530332384\"" Oct 8 19:54:11.993955 containerd[1474]: time="2024-10-08T19:54:11.993915958Z" level=info msg="StartContainer for \"55c150c2b5fa5adedd134b1cddd33bed7d0df68f1c43b29a7c79d2d530332384\"" Oct 8 19:54:12.039820 systemd[1]: Started cri-containerd-55c150c2b5fa5adedd134b1cddd33bed7d0df68f1c43b29a7c79d2d530332384.scope - libcontainer container 55c150c2b5fa5adedd134b1cddd33bed7d0df68f1c43b29a7c79d2d530332384. Oct 8 19:54:12.071570 kubelet[2636]: E1008 19:54:12.069187 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:12.086800 kubelet[2636]: I1008 19:54:12.086717 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5z9jp" podStartSLOduration=45.086689955 podStartE2EDuration="45.086689955s" podCreationTimestamp="2024-10-08 19:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:54:12.086617536 +0000 UTC m=+59.300242456" watchObservedRunningTime="2024-10-08 19:54:12.086689955 +0000 UTC m=+59.300314885" Oct 8 19:54:12.153687 containerd[1474]: time="2024-10-08T19:54:12.153500157Z" level=info msg="StartContainer for \"55c150c2b5fa5adedd134b1cddd33bed7d0df68f1c43b29a7c79d2d530332384\" returns successfully" Oct 8 
19:54:12.403805 systemd-networkd[1398]: calif8c3817d11d: Gained IPv6LL Oct 8 19:54:12.576187 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:48230.service - OpenSSH per-connection server daemon (10.0.0.1:48230). Oct 8 19:54:12.617815 sshd[4802]: Accepted publickey for core from 10.0.0.1 port 48230 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:12.619790 sshd[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:12.623974 systemd-logind[1458]: New session 15 of user core. Oct 8 19:54:12.631725 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:54:12.750683 sshd[4802]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:12.754454 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:48230.service: Deactivated successfully. Oct 8 19:54:12.756884 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:54:12.757581 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:54:12.758474 systemd-logind[1458]: Removed session 15. Oct 8 19:54:12.863956 containerd[1474]: time="2024-10-08T19:54:12.863904901Z" level=info msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.901 [WARNING][4831] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da500ec5-3c08-4ed4-8012-9ada49d45be0", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411", Pod:"coredns-7db6d8ff4d-5z9jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali685c4d2e263", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.902 [INFO][4831] k8s.go 608: Cleaning up netns 
ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.902 [INFO][4831] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" iface="eth0" netns="" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.902 [INFO][4831] k8s.go 615: Releasing IP address(es) ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.902 [INFO][4831] utils.go 188: Calico CNI releasing IP address ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.929 [INFO][4841] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.929 [INFO][4841] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.929 [INFO][4841] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.936 [WARNING][4841] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.936 [INFO][4841] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.939 [INFO][4841] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:12.945039 containerd[1474]: 2024-10-08 19:54:12.942 [INFO][4831] k8s.go 621: Teardown processing complete. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:12.945039 containerd[1474]: time="2024-10-08T19:54:12.944853240Z" level=info msg="TearDown network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" successfully" Oct 8 19:54:12.945039 containerd[1474]: time="2024-10-08T19:54:12.944888608Z" level=info msg="StopPodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" returns successfully" Oct 8 19:54:12.951921 containerd[1474]: time="2024-10-08T19:54:12.951868476Z" level=info msg="RemovePodSandbox for \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" Oct 8 19:54:12.954062 containerd[1474]: time="2024-10-08T19:54:12.954010397Z" level=info msg="Forcibly stopping sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\"" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:12.996 [WARNING][4864] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"da500ec5-3c08-4ed4-8012-9ada49d45be0", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39bbc6609b78e05a765806f0f75545a6e039abdfb2881affb6c2ca95ab221411", Pod:"coredns-7db6d8ff4d-5z9jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali685c4d2e263", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:12.996 [INFO][4864] k8s.go 608: Cleaning up netns 
ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:12.996 [INFO][4864] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" iface="eth0" netns="" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:12.996 [INFO][4864] k8s.go 615: Releasing IP address(es) ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:12.996 [INFO][4864] utils.go 188: Calico CNI releasing IP address ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.023 [INFO][4872] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.023 [INFO][4872] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.023 [INFO][4872] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.028 [WARNING][4872] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.028 [INFO][4872] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" HandleID="k8s-pod-network.b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Workload="localhost-k8s-coredns--7db6d8ff4d--5z9jp-eth0" Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.032 [INFO][4872] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.037668 containerd[1474]: 2024-10-08 19:54:13.034 [INFO][4864] k8s.go 621: Teardown processing complete. ContainerID="b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587" Oct 8 19:54:13.037668 containerd[1474]: time="2024-10-08T19:54:13.037630401Z" level=info msg="TearDown network for sandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" successfully" Oct 8 19:54:13.047487 containerd[1474]: time="2024-10-08T19:54:13.047428292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:54:13.047680 containerd[1474]: time="2024-10-08T19:54:13.047517102Z" level=info msg="RemovePodSandbox \"b7c4f194bdcf2b5384e5bf81497cb96ab48930b511bded55f0de3db4b15a2587\" returns successfully" Oct 8 19:54:13.048581 containerd[1474]: time="2024-10-08T19:54:13.048168032Z" level=info msg="StopPodSandbox for \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\"" Oct 8 19:54:13.048581 containerd[1474]: time="2024-10-08T19:54:13.048293490Z" level=info msg="TearDown network for sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" successfully" Oct 8 19:54:13.048581 containerd[1474]: time="2024-10-08T19:54:13.048306706Z" level=info msg="StopPodSandbox for \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" returns successfully" Oct 8 19:54:13.048733 containerd[1474]: time="2024-10-08T19:54:13.048660139Z" level=info msg="RemovePodSandbox for \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\"" Oct 8 19:54:13.048733 containerd[1474]: time="2024-10-08T19:54:13.048687801Z" level=info msg="Forcibly stopping sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\"" Oct 8 19:54:13.048780 containerd[1474]: time="2024-10-08T19:54:13.048764998Z" level=info msg="TearDown network for sandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" successfully" Oct 8 19:54:13.053372 containerd[1474]: time="2024-10-08T19:54:13.053297751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:54:13.053505 containerd[1474]: time="2024-10-08T19:54:13.053381170Z" level=info msg="RemovePodSandbox \"779161b244c41406243cd23b6fd01aba69498a1dd174e9b245f83d9b5408eb6f\" returns successfully" Oct 8 19:54:13.053788 containerd[1474]: time="2024-10-08T19:54:13.053764541Z" level=info msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" Oct 8 19:54:13.094739 kubelet[2636]: E1008 19:54:13.094677 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.095 [WARNING][4895] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0", GenerateName:"calico-kube-controllers-7f455db588-", Namespace:"calico-system", SelfLink:"", UID:"32f3dcde-f5c9-4e4f-9205-42916a8cefb8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f455db588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4", Pod:"calico-kube-controllers-7f455db588-9gpdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8c3817d11d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.095 [INFO][4895] k8s.go 608: Cleaning up netns ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.095 [INFO][4895] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" iface="eth0" netns="" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.095 [INFO][4895] k8s.go 615: Releasing IP address(es) ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.095 [INFO][4895] utils.go 188: Calico CNI releasing IP address ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.118 [INFO][4903] ipam_plugin.go 417: Releasing address using handleID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.118 [INFO][4903] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.118 [INFO][4903] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.124 [WARNING][4903] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.124 [INFO][4903] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.125 [INFO][4903] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.131916 containerd[1474]: 2024-10-08 19:54:13.128 [INFO][4895] k8s.go 621: Teardown processing complete. 
ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.132608 containerd[1474]: time="2024-10-08T19:54:13.131944710Z" level=info msg="TearDown network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" successfully" Oct 8 19:54:13.132608 containerd[1474]: time="2024-10-08T19:54:13.131971411Z" level=info msg="StopPodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" returns successfully" Oct 8 19:54:13.132608 containerd[1474]: time="2024-10-08T19:54:13.132487745Z" level=info msg="RemovePodSandbox for \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" Oct 8 19:54:13.132608 containerd[1474]: time="2024-10-08T19:54:13.132518834Z" level=info msg="Forcibly stopping sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\"" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.171 [WARNING][4927] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0", GenerateName:"calico-kube-controllers-7f455db588-", Namespace:"calico-system", SelfLink:"", UID:"32f3dcde-f5c9-4e4f-9205-42916a8cefb8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f455db588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4", Pod:"calico-kube-controllers-7f455db588-9gpdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8c3817d11d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.171 [INFO][4927] k8s.go 608: Cleaning up netns ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.171 [INFO][4927] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" iface="eth0" netns="" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.171 [INFO][4927] k8s.go 615: Releasing IP address(es) ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.171 [INFO][4927] utils.go 188: Calico CNI releasing IP address ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.202 [INFO][4935] ipam_plugin.go 417: Releasing address using handleID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.202 [INFO][4935] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.203 [INFO][4935] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.209 [WARNING][4935] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.209 [INFO][4935] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" HandleID="k8s-pod-network.b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Workload="localhost-k8s-calico--kube--controllers--7f455db588--9gpdk-eth0" Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.211 [INFO][4935] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.227771 containerd[1474]: 2024-10-08 19:54:13.217 [INFO][4927] k8s.go 621: Teardown processing complete. ContainerID="b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720" Oct 8 19:54:13.228413 containerd[1474]: time="2024-10-08T19:54:13.228167153Z" level=info msg="TearDown network for sandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" successfully" Oct 8 19:54:13.247190 containerd[1474]: time="2024-10-08T19:54:13.246772459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:54:13.247310 containerd[1474]: time="2024-10-08T19:54:13.247207828Z" level=info msg="RemovePodSandbox \"b282f7744da7fc5187df310f8ac11061bf31722d7ea3177d0dd31321e6db3720\" returns successfully" Oct 8 19:54:13.247841 containerd[1474]: time="2024-10-08T19:54:13.247805887Z" level=info msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.294 [WARNING][4958] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9hb6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b65116-575e-4269-8542-d6d284a4cec8", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342", Pod:"csi-node-driver-h9hb6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali6728a4b4dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.294 [INFO][4958] k8s.go 608: Cleaning up netns ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.294 [INFO][4958] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" iface="eth0" netns="" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.294 [INFO][4958] k8s.go 615: Releasing IP address(es) ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.294 [INFO][4958] utils.go 188: Calico CNI releasing IP address ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.318 [INFO][4966] ipam_plugin.go 417: Releasing address using handleID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.318 [INFO][4966] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.319 [INFO][4966] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.324 [WARNING][4966] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.324 [INFO][4966] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.326 [INFO][4966] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.332350 containerd[1474]: 2024-10-08 19:54:13.328 [INFO][4958] k8s.go 621: Teardown processing complete. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.332916 containerd[1474]: time="2024-10-08T19:54:13.332416615Z" level=info msg="TearDown network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" successfully" Oct 8 19:54:13.332916 containerd[1474]: time="2024-10-08T19:54:13.332455339Z" level=info msg="StopPodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" returns successfully" Oct 8 19:54:13.333205 containerd[1474]: time="2024-10-08T19:54:13.333154591Z" level=info msg="RemovePodSandbox for \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" Oct 8 19:54:13.333205 containerd[1474]: time="2024-10-08T19:54:13.333200999Z" level=info msg="Forcibly stopping sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\"" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.373 [WARNING][4989] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9hb6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b65116-575e-4269-8542-d6d284a4cec8", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342", Pod:"csi-node-driver-h9hb6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6728a4b4dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.374 [INFO][4989] k8s.go 608: Cleaning up netns ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.374 [INFO][4989] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" iface="eth0" netns="" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.374 [INFO][4989] k8s.go 615: Releasing IP address(es) ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.374 [INFO][4989] utils.go 188: Calico CNI releasing IP address ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.438 [INFO][4996] ipam_plugin.go 417: Releasing address using handleID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.439 [INFO][4996] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.439 [INFO][4996] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.445 [WARNING][4996] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.446 [INFO][4996] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" HandleID="k8s-pod-network.17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Workload="localhost-k8s-csi--node--driver--h9hb6-eth0" Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.448 [INFO][4996] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:54:13.456669 containerd[1474]: 2024-10-08 19:54:13.453 [INFO][4989] k8s.go 621: Teardown processing complete. ContainerID="17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88" Oct 8 19:54:13.457546 containerd[1474]: time="2024-10-08T19:54:13.456708709Z" level=info msg="TearDown network for sandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" successfully" Oct 8 19:54:13.461863 containerd[1474]: time="2024-10-08T19:54:13.461786350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:54:13.462168 containerd[1474]: time="2024-10-08T19:54:13.461889847Z" level=info msg="RemovePodSandbox \"17882a616c3c2f0ff131eb6a40c6acae612e3cf61fef021ccd9f8189ec889b88\" returns successfully" Oct 8 19:54:13.462841 containerd[1474]: time="2024-10-08T19:54:13.462804439Z" level=info msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.505 [WARNING][5023] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5680a2c6-5726-4676-ad87-66368405db02", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c", Pod:"coredns-7db6d8ff4d-5b2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3982f92ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.506 [INFO][5023] k8s.go 608: Cleaning up netns 
ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.506 [INFO][5023] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" iface="eth0" netns="" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.506 [INFO][5023] k8s.go 615: Releasing IP address(es) ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.506 [INFO][5023] utils.go 188: Calico CNI releasing IP address ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.539 [INFO][5031] ipam_plugin.go 417: Releasing address using handleID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.539 [INFO][5031] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.539 [INFO][5031] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.545 [WARNING][5031] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.545 [INFO][5031] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.548 [INFO][5031] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.555596 containerd[1474]: 2024-10-08 19:54:13.551 [INFO][5023] k8s.go 621: Teardown processing complete. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.555596 containerd[1474]: time="2024-10-08T19:54:13.555384445Z" level=info msg="TearDown network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" successfully" Oct 8 19:54:13.555596 containerd[1474]: time="2024-10-08T19:54:13.555420974Z" level=info msg="StopPodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" returns successfully" Oct 8 19:54:13.556127 systemd-networkd[1398]: cali685c4d2e263: Gained IPv6LL Oct 8 19:54:13.557192 containerd[1474]: time="2024-10-08T19:54:13.556785463Z" level=info msg="RemovePodSandbox for \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" Oct 8 19:54:13.557192 containerd[1474]: time="2024-10-08T19:54:13.556835359Z" level=info msg="Forcibly stopping sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\"" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.606 [WARNING][5054] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5680a2c6-5726-4676-ad87-66368405db02", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b0ed9d2df4df45d9a948589285a61c67fd6d57d99b7ee8226a5ba23c481b66c", Pod:"coredns-7db6d8ff4d-5b2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3982f92ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.606 [INFO][5054] k8s.go 608: Cleaning up netns 
ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.606 [INFO][5054] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" iface="eth0" netns="" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.606 [INFO][5054] k8s.go 615: Releasing IP address(es) ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.606 [INFO][5054] utils.go 188: Calico CNI releasing IP address ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.635 [INFO][5062] ipam_plugin.go 417: Releasing address using handleID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.635 [INFO][5062] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.635 [INFO][5062] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.641 [WARNING][5062] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.641 [INFO][5062] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" HandleID="k8s-pod-network.b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Workload="localhost-k8s-coredns--7db6d8ff4d--5b2v2-eth0" Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.643 [INFO][5062] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:54:13.649235 containerd[1474]: 2024-10-08 19:54:13.646 [INFO][5054] k8s.go 621: Teardown processing complete. ContainerID="b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375" Oct 8 19:54:13.649235 containerd[1474]: time="2024-10-08T19:54:13.649212488Z" level=info msg="TearDown network for sandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" successfully" Oct 8 19:54:13.663067 containerd[1474]: time="2024-10-08T19:54:13.662808793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:54:13.663067 containerd[1474]: time="2024-10-08T19:54:13.662918471Z" level=info msg="RemovePodSandbox \"b868d12866fab100c69427e7286b6b4b54f51e184ecf561480361db57defe375\" returns successfully" Oct 8 19:54:13.848803 containerd[1474]: time="2024-10-08T19:54:13.848727940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:13.872669 containerd[1474]: time="2024-10-08T19:54:13.872565130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 19:54:13.931882 containerd[1474]: time="2024-10-08T19:54:13.931660342Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:13.968586 containerd[1474]: time="2024-10-08T19:54:13.968415374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:13.969401 containerd[1474]: time="2024-10-08T19:54:13.969355364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.067324551s" Oct 8 19:54:13.969401 containerd[1474]: time="2024-10-08T19:54:13.969394570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 19:54:13.970586 containerd[1474]: time="2024-10-08T19:54:13.970557655Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:54:13.979334 containerd[1474]: time="2024-10-08T19:54:13.979281770Z" level=info msg="CreateContainer within sandbox \"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:54:14.009594 containerd[1474]: time="2024-10-08T19:54:14.009423522Z" level=info msg="CreateContainer within sandbox \"5921402fabbaf7f6bf6eb56e04dc02af6516a4003252566d5e539ed0726ea1d4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ef6a405065a32bd6de3a4d9acac27e2dbeb72631e76b5e635511fbdd6845a800\"" Oct 8 19:54:14.010384 containerd[1474]: time="2024-10-08T19:54:14.010324888Z" level=info msg="StartContainer for \"ef6a405065a32bd6de3a4d9acac27e2dbeb72631e76b5e635511fbdd6845a800\"" Oct 8 19:54:14.052833 systemd[1]: Started cri-containerd-ef6a405065a32bd6de3a4d9acac27e2dbeb72631e76b5e635511fbdd6845a800.scope - libcontainer container ef6a405065a32bd6de3a4d9acac27e2dbeb72631e76b5e635511fbdd6845a800. 
Oct 8 19:54:14.101435 kubelet[2636]: E1008 19:54:14.101240 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:14.519505 containerd[1474]: time="2024-10-08T19:54:14.519436632Z" level=info msg="StartContainer for \"ef6a405065a32bd6de3a4d9acac27e2dbeb72631e76b5e635511fbdd6845a800\" returns successfully" Oct 8 19:54:15.169896 kubelet[2636]: I1008 19:54:15.169770 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f455db588-9gpdk" podStartSLOduration=37.786957994 podStartE2EDuration="40.169748036s" podCreationTimestamp="2024-10-08 19:53:35 +0000 UTC" firstStartedPulling="2024-10-08 19:54:11.587569125 +0000 UTC m=+58.801194055" lastFinishedPulling="2024-10-08 19:54:13.970359137 +0000 UTC m=+61.183984097" observedRunningTime="2024-10-08 19:54:15.169441702 +0000 UTC m=+62.383066622" watchObservedRunningTime="2024-10-08 19:54:15.169748036 +0000 UTC m=+62.383372966" Oct 8 19:54:15.344655 systemd[1]: run-containerd-runc-k8s.io-2d86cf4dfa752e67cf587ccb3643790f8ed4e2dcb9b66accd7c5fb974dd34e17-runc.bjA76K.mount: Deactivated successfully. Oct 8 19:54:15.408707 kubelet[2636]: E1008 19:54:15.408671 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:17.762946 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:48242.service - OpenSSH per-connection server daemon (10.0.0.1:48242). Oct 8 19:54:17.821329 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 48242 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:17.823486 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:17.828641 systemd-logind[1458]: New session 16 of user core. 
Oct 8 19:54:17.835950 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:54:17.975258 sshd[5169]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:17.980751 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:48242.service: Deactivated successfully. Oct 8 19:54:17.983230 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:54:17.984064 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:54:17.985197 systemd-logind[1458]: Removed session 16. Oct 8 19:54:18.012118 containerd[1474]: time="2024-10-08T19:54:18.012035770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:18.077985 containerd[1474]: time="2024-10-08T19:54:18.077869754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 19:54:18.112405 containerd[1474]: time="2024-10-08T19:54:18.112321820Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:18.228898 containerd[1474]: time="2024-10-08T19:54:18.228814094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:18.230023 containerd[1474]: time="2024-10-08T19:54:18.229977015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 
4.259384244s" Oct 8 19:54:18.230107 containerd[1474]: time="2024-10-08T19:54:18.230027802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 19:54:18.232539 containerd[1474]: time="2024-10-08T19:54:18.232487218Z" level=info msg="CreateContainer within sandbox \"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:54:19.525623 containerd[1474]: time="2024-10-08T19:54:19.525554383Z" level=info msg="CreateContainer within sandbox \"752f4d96e2763e28759fc22a900a21fa73604e88f106a5a475eaf3c04b256342\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7d91948d177c7f54c28c4c48d3180b064ff91351b2486301f28073ea0ca2f878\"" Oct 8 19:54:19.526268 containerd[1474]: time="2024-10-08T19:54:19.526218445Z" level=info msg="StartContainer for \"7d91948d177c7f54c28c4c48d3180b064ff91351b2486301f28073ea0ca2f878\"" Oct 8 19:54:19.569725 systemd[1]: Started cri-containerd-7d91948d177c7f54c28c4c48d3180b064ff91351b2486301f28073ea0ca2f878.scope - libcontainer container 7d91948d177c7f54c28c4c48d3180b064ff91351b2486301f28073ea0ca2f878. 
Oct 8 19:54:19.692947 containerd[1474]: time="2024-10-08T19:54:19.692864221Z" level=info msg="StartContainer for \"7d91948d177c7f54c28c4c48d3180b064ff91351b2486301f28073ea0ca2f878\" returns successfully" Oct 8 19:54:19.964402 kubelet[2636]: I1008 19:54:19.964347 2636 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:54:19.964402 kubelet[2636]: I1008 19:54:19.964398 2636 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:54:20.725022 kubelet[2636]: I1008 19:54:20.724487 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h9hb6" podStartSLOduration=36.758575126 podStartE2EDuration="45.724466181s" podCreationTimestamp="2024-10-08 19:53:35 +0000 UTC" firstStartedPulling="2024-10-08 19:54:09.265202293 +0000 UTC m=+56.478827223" lastFinishedPulling="2024-10-08 19:54:18.231093348 +0000 UTC m=+65.444718278" observedRunningTime="2024-10-08 19:54:20.723314323 +0000 UTC m=+67.936939253" watchObservedRunningTime="2024-10-08 19:54:20.724466181 +0000 UTC m=+67.938091121" Oct 8 19:54:22.990277 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:41294.service - OpenSSH per-connection server daemon (10.0.0.1:41294). Oct 8 19:54:23.036874 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 41294 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:23.039022 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:23.043454 systemd-logind[1458]: New session 17 of user core. Oct 8 19:54:23.049668 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 8 19:54:23.177725 sshd[5230]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:23.183100 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:41294.service: Deactivated successfully. Oct 8 19:54:23.185358 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:54:23.186121 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:54:23.187348 systemd-logind[1458]: Removed session 17. Oct 8 19:54:23.875428 kubelet[2636]: E1008 19:54:23.875374 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:28.191068 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:41304.service - OpenSSH per-connection server daemon (10.0.0.1:41304). Oct 8 19:54:28.237077 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 41304 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:28.239310 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:28.244395 systemd-logind[1458]: New session 18 of user core. Oct 8 19:54:28.254708 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:54:28.369407 sshd[5275]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:28.380066 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:41304.service: Deactivated successfully. Oct 8 19:54:28.382145 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:54:28.384694 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:54:28.391510 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:41316.service - OpenSSH per-connection server daemon (10.0.0.1:41316). Oct 8 19:54:28.392804 systemd-logind[1458]: Removed session 18. 
Oct 8 19:54:28.422638 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 41316 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:28.424588 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:28.428939 systemd-logind[1458]: New session 19 of user core. Oct 8 19:54:28.441740 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:54:28.721093 sshd[5290]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:28.732580 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:41316.service: Deactivated successfully. Oct 8 19:54:28.734772 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:54:28.736411 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:54:28.741847 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:41326.service - OpenSSH per-connection server daemon (10.0.0.1:41326). Oct 8 19:54:28.742917 systemd-logind[1458]: Removed session 19. Oct 8 19:54:28.773660 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 41326 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:28.775681 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:28.780201 systemd-logind[1458]: New session 20 of user core. Oct 8 19:54:28.792762 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 8 19:54:30.202933 kubelet[2636]: I1008 19:54:30.202860 2636 topology_manager.go:215] "Topology Admit Handler" podUID="05a055bd-9383-427c-bb93-fd25aaf3dc55" podNamespace="calico-apiserver" podName="calico-apiserver-6986675545-7tvvv"
Oct 8 19:54:30.210021 kubelet[2636]: W1008 19:54:30.209898 2636 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Oct 8 19:54:30.210021 kubelet[2636]: E1008 19:54:30.209972 2636 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Oct 8 19:54:30.210386 kubelet[2636]: W1008 19:54:30.210321 2636 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Oct 8 19:54:30.210386 kubelet[2636]: E1008 19:54:30.210349 2636 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Oct 8 19:54:30.226122 systemd[1]: Created slice kubepods-besteffort-pod05a055bd_9383_427c_bb93_fd25aaf3dc55.slice - libcontainer container kubepods-besteffort-pod05a055bd_9383_427c_bb93_fd25aaf3dc55.slice.
Oct 8 19:54:30.391290 kubelet[2636]: I1008 19:54:30.391222 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mq9\" (UniqueName: \"kubernetes.io/projected/05a055bd-9383-427c-bb93-fd25aaf3dc55-kube-api-access-m8mq9\") pod \"calico-apiserver-6986675545-7tvvv\" (UID: \"05a055bd-9383-427c-bb93-fd25aaf3dc55\") " pod="calico-apiserver/calico-apiserver-6986675545-7tvvv"
Oct 8 19:54:30.391290 kubelet[2636]: I1008 19:54:30.391285 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/05a055bd-9383-427c-bb93-fd25aaf3dc55-calico-apiserver-certs\") pod \"calico-apiserver-6986675545-7tvvv\" (UID: \"05a055bd-9383-427c-bb93-fd25aaf3dc55\") " pod="calico-apiserver/calico-apiserver-6986675545-7tvvv"
Oct 8 19:54:30.568718 sshd[5305]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:30.591035 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:33700.service - OpenSSH per-connection server daemon (10.0.0.1:33700).
Oct 8 19:54:30.591860 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:41326.service: Deactivated successfully.
Oct 8 19:54:30.598106 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:54:30.600642 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:54:30.603984 systemd-logind[1458]: Removed session 20.
Oct 8 19:54:30.635988 sshd[5324]: Accepted publickey for core from 10.0.0.1 port 33700 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:30.637957 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:30.642864 systemd-logind[1458]: New session 21 of user core.
Oct 8 19:54:30.655788 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:54:30.878771 kubelet[2636]: E1008 19:54:30.877711 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:54:30.905277 sshd[5324]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:30.915905 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:33700.service: Deactivated successfully.
Oct 8 19:54:30.918834 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:54:30.920796 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:54:30.927954 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:33714.service - OpenSSH per-connection server daemon (10.0.0.1:33714).
Oct 8 19:54:30.929251 systemd-logind[1458]: Removed session 21.
Oct 8 19:54:30.959031 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 33714 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:30.962485 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:30.969577 systemd-logind[1458]: New session 22 of user core.
Oct 8 19:54:30.976853 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:54:31.125065 sshd[5339]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:31.131216 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:33714.service: Deactivated successfully.
Oct 8 19:54:31.134409 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:54:31.135232 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:54:31.136393 systemd-logind[1458]: Removed session 22.
Oct 8 19:54:31.503049 kubelet[2636]: E1008 19:54:31.502731 2636 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 8 19:54:31.503049 kubelet[2636]: E1008 19:54:31.502797 2636 projected.go:200] Error preparing data for projected volume kube-api-access-m8mq9 for pod calico-apiserver/calico-apiserver-6986675545-7tvvv: failed to sync configmap cache: timed out waiting for the condition
Oct 8 19:54:31.503049 kubelet[2636]: E1008 19:54:31.502901 2636 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/05a055bd-9383-427c-bb93-fd25aaf3dc55-kube-api-access-m8mq9 podName:05a055bd-9383-427c-bb93-fd25aaf3dc55 nodeName:}" failed. No retries permitted until 2024-10-08 19:54:32.002873355 +0000 UTC m=+79.216498285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m8mq9" (UniqueName: "kubernetes.io/projected/05a055bd-9383-427c-bb93-fd25aaf3dc55-kube-api-access-m8mq9") pod "calico-apiserver-6986675545-7tvvv" (UID: "05a055bd-9383-427c-bb93-fd25aaf3dc55") : failed to sync configmap cache: timed out waiting for the condition
Oct 8 19:54:32.030794 containerd[1474]: time="2024-10-08T19:54:32.030742359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6986675545-7tvvv,Uid:05a055bd-9383-427c-bb93-fd25aaf3dc55,Namespace:calico-apiserver,Attempt:0,}"
Oct 8 19:54:32.160087 systemd-networkd[1398]: cali8682bbd936f: Link UP
Oct 8 19:54:32.161769 systemd-networkd[1398]: cali8682bbd936f: Gained carrier
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.082 [INFO][5359] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0 calico-apiserver-6986675545- calico-apiserver 05a055bd-9383-427c-bb93-fd25aaf3dc55 1102 0 2024-10-08 19:54:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6986675545 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6986675545-7tvvv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8682bbd936f [] []}} ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.082 [INFO][5359] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.117 [INFO][5372] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" HandleID="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Workload="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.126 [INFO][5372] ipam_plugin.go 270: Auto assigning IP ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" HandleID="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Workload="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6986675545-7tvvv", "timestamp":"2024-10-08 19:54:32.117705104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.126 [INFO][5372] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.126 [INFO][5372] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.126 [INFO][5372] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.129 [INFO][5372] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.133 [INFO][5372] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.137 [INFO][5372] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.139 [INFO][5372] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.141 [INFO][5372] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.141 [INFO][5372] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.142 [INFO][5372] ipam.go 1685: Creating new handle: k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.146 [INFO][5372] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.153 [INFO][5372] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.153 [INFO][5372] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" host="localhost"
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.153 [INFO][5372] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:54:32.174435 containerd[1474]: 2024-10-08 19:54:32.153 [INFO][5372] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" HandleID="k8s-pod-network.ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Workload="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.156 [INFO][5359] k8s.go 386: Populated endpoint ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0", GenerateName:"calico-apiserver-6986675545-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a055bd-9383-427c-bb93-fd25aaf3dc55", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 54, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6986675545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6986675545-7tvvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8682bbd936f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.156 [INFO][5359] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.156 [INFO][5359] dataplane_linux.go 68: Setting the host side veth name to cali8682bbd936f ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.159 [INFO][5359] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.160 [INFO][5359] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0", GenerateName:"calico-apiserver-6986675545-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a055bd-9383-427c-bb93-fd25aaf3dc55", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 54, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6986675545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008", Pod:"calico-apiserver-6986675545-7tvvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8682bbd936f", MAC:"06:be:56:5c:14:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:54:32.175254 containerd[1474]: 2024-10-08 19:54:32.168 [INFO][5359] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008" Namespace="calico-apiserver" Pod="calico-apiserver-6986675545-7tvvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6986675545--7tvvv-eth0"
Oct 8 19:54:32.203184 containerd[1474]: time="2024-10-08T19:54:32.202974971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:54:32.203184 containerd[1474]: time="2024-10-08T19:54:32.203096672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:54:32.203184 containerd[1474]: time="2024-10-08T19:54:32.203126519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:54:32.203897 containerd[1474]: time="2024-10-08T19:54:32.203274810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:54:32.230789 systemd[1]: Started cri-containerd-ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008.scope - libcontainer container ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008.
Oct 8 19:54:32.247768 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:54:32.283684 containerd[1474]: time="2024-10-08T19:54:32.283420038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6986675545-7tvvv,Uid:05a055bd-9383-427c-bb93-fd25aaf3dc55,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008\""
Oct 8 19:54:32.286709 containerd[1474]: time="2024-10-08T19:54:32.286442855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 19:54:33.523742 systemd-networkd[1398]: cali8682bbd936f: Gained IPv6LL
Oct 8 19:54:34.827902 containerd[1474]: time="2024-10-08T19:54:34.827815187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:54:34.828760 containerd[1474]: time="2024-10-08T19:54:34.828702428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 8 19:54:34.829942 containerd[1474]: time="2024-10-08T19:54:34.829871302Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:54:34.832152 containerd[1474]: time="2024-10-08T19:54:34.832118257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:54:34.832889 containerd[1474]: time="2024-10-08T19:54:34.832844443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.546362594s"
Oct 8 19:54:34.832889 containerd[1474]: time="2024-10-08T19:54:34.832879199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 19:54:34.835218 containerd[1474]: time="2024-10-08T19:54:34.835166230Z" level=info msg="CreateContainer within sandbox \"ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:54:34.848184 containerd[1474]: time="2024-10-08T19:54:34.848124622Z" level=info msg="CreateContainer within sandbox \"ea53712be8c9050595884efed9848d3246a0d41a8526470a4d5676c53a68a008\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c2fb8d35c94eee3b4d4a738e140f4d53b95e106c780fd56b5adda33a272ed41\""
Oct 8 19:54:34.849151 containerd[1474]: time="2024-10-08T19:54:34.849098647Z" level=info msg="StartContainer for \"5c2fb8d35c94eee3b4d4a738e140f4d53b95e106c780fd56b5adda33a272ed41\""
Oct 8 19:54:34.888687 systemd[1]: Started cri-containerd-5c2fb8d35c94eee3b4d4a738e140f4d53b95e106c780fd56b5adda33a272ed41.scope - libcontainer container 5c2fb8d35c94eee3b4d4a738e140f4d53b95e106c780fd56b5adda33a272ed41.
Oct 8 19:54:34.935022 containerd[1474]: time="2024-10-08T19:54:34.934954130Z" level=info msg="StartContainer for \"5c2fb8d35c94eee3b4d4a738e140f4d53b95e106c780fd56b5adda33a272ed41\" returns successfully"
Oct 8 19:54:35.259852 kubelet[2636]: I1008 19:54:35.258602 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6986675545-7tvvv" podStartSLOduration=2.710603592 podStartE2EDuration="5.258580724s" podCreationTimestamp="2024-10-08 19:54:30 +0000 UTC" firstStartedPulling="2024-10-08 19:54:32.285731828 +0000 UTC m=+79.499356758" lastFinishedPulling="2024-10-08 19:54:34.83370896 +0000 UTC m=+82.047333890" observedRunningTime="2024-10-08 19:54:35.256985223 +0000 UTC m=+82.470610153" watchObservedRunningTime="2024-10-08 19:54:35.258580724 +0000 UTC m=+82.472205654"
Oct 8 19:54:36.139010 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:33728.service - OpenSSH per-connection server daemon (10.0.0.1:33728).
Oct 8 19:54:36.175319 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 33728 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:36.177061 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:36.181184 systemd-logind[1458]: New session 23 of user core.
Oct 8 19:54:36.191696 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:54:36.361691 sshd[5482]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:36.367505 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:33728.service: Deactivated successfully.
Oct 8 19:54:36.369798 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:54:36.370607 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:54:36.371923 systemd-logind[1458]: Removed session 23.
Oct 8 19:54:41.376993 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:35994.service - OpenSSH per-connection server daemon (10.0.0.1:35994).
Oct 8 19:54:41.412045 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 35994 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:41.414065 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:41.418479 systemd-logind[1458]: New session 24 of user core.
Oct 8 19:54:41.426675 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:54:41.548135 sshd[5513]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:41.552317 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:35994.service: Deactivated successfully.
Oct 8 19:54:41.554707 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:54:41.555455 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:54:41.556673 systemd-logind[1458]: Removed session 24.
Oct 8 19:54:46.578141 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:36000.service - OpenSSH per-connection server daemon (10.0.0.1:36000).
Oct 8 19:54:46.633874 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 36000 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:46.636619 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:46.643272 systemd-logind[1458]: New session 25 of user core.
Oct 8 19:54:46.649957 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:54:46.838181 sshd[5560]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:46.844324 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:36000.service: Deactivated successfully.
Oct 8 19:54:46.847412 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:54:46.848281 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:54:46.849844 systemd-logind[1458]: Removed session 25.
Oct 8 19:54:49.875434 kubelet[2636]: E1008 19:54:49.875367 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:54:51.861886 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:50164.service - OpenSSH per-connection server daemon (10.0.0.1:50164).
Oct 8 19:54:51.875442 kubelet[2636]: E1008 19:54:51.875407 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:54:51.889075 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 50164 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:51.891019 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:51.895284 systemd-logind[1458]: New session 26 of user core.
Oct 8 19:54:51.901673 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:54:52.013261 sshd[5599]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:52.017216 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:50164.service: Deactivated successfully.
Oct 8 19:54:52.019466 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:54:52.020252 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:54:52.021177 systemd-logind[1458]: Removed session 26.