Oct 9 01:06:00.890036 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 01:06:00.890057 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:06:00.890068 kernel: BIOS-provided physical RAM map:
Oct 9 01:06:00.890075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 01:06:00.890081 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 01:06:00.890087 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 01:06:00.890094 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 9 01:06:00.890101 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 9 01:06:00.890107 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 01:06:00.890115 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 01:06:00.890126 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 01:06:00.890132 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 01:06:00.890138 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 9 01:06:00.890145 kernel: NX (Execute Disable) protection: active
Oct 9 01:06:00.890152 kernel: APIC: Static calls initialized
Oct 9 01:06:00.890161 kernel: SMBIOS 2.8 present.
Oct 9 01:06:00.890171 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 9 01:06:00.890178 kernel: Hypervisor detected: KVM
Oct 9 01:06:00.890184 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 01:06:00.890191 kernel: kvm-clock: using sched offset of 3002501142 cycles
Oct 9 01:06:00.890198 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 01:06:00.890206 kernel: tsc: Detected 2794.748 MHz processor
Oct 9 01:06:00.890213 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 01:06:00.890220 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 01:06:00.890227 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 9 01:06:00.890236 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 01:06:00.890243 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 01:06:00.890250 kernel: Using GB pages for direct mapping
Oct 9 01:06:00.890258 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:06:00.890265 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 9 01:06:00.890272 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890279 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890286 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890295 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 9 01:06:00.890302 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890309 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890316 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890323 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:00.890330 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Oct 9 01:06:00.890337 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Oct 9 01:06:00.890350 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 9 01:06:00.890360 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Oct 9 01:06:00.890367 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Oct 9 01:06:00.890374 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Oct 9 01:06:00.890381 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Oct 9 01:06:00.890388 kernel: No NUMA configuration found
Oct 9 01:06:00.890395 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 9 01:06:00.890402 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 9 01:06:00.890412 kernel: Zone ranges:
Oct 9 01:06:00.890420 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 01:06:00.890427 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 9 01:06:00.890434 kernel: Normal empty
Oct 9 01:06:00.890441 kernel: Movable zone start for each node
Oct 9 01:06:00.890461 kernel: Early memory node ranges
Oct 9 01:06:00.890468 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 01:06:00.890476 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 9 01:06:00.890483 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 9 01:06:00.890502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:06:00.890510 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 01:06:00.890517 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 01:06:00.890525 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 01:06:00.890532 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 01:06:00.890539 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 01:06:00.890546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 01:06:00.890553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 01:06:00.890560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 01:06:00.890570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 01:06:00.890577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 01:06:00.890584 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 01:06:00.890591 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 01:06:00.890599 kernel: TSC deadline timer available
Oct 9 01:06:00.890606 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 9 01:06:00.890614 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 01:06:00.890623 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 9 01:06:00.890633 kernel: kvm-guest: setup PV sched yield
Oct 9 01:06:00.890650 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 01:06:00.890660 kernel: Booting paravirtualized kernel on KVM
Oct 9 01:06:00.890671 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 01:06:00.890681 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 9 01:06:00.890689 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 9 01:06:00.890696 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 9 01:06:00.890703 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 9 01:06:00.890710 kernel: kvm-guest: PV spinlocks enabled
Oct 9 01:06:00.890717 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 9 01:06:00.890729 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:06:00.890737 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:06:00.890744 kernel: random: crng init done
Oct 9 01:06:00.890751 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 01:06:00.890759 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:06:00.890766 kernel: Fallback order for Node 0: 0
Oct 9 01:06:00.890773 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 9 01:06:00.890780 kernel: Policy zone: DMA32
Oct 9 01:06:00.890790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:06:00.890805 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Oct 9 01:06:00.890819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 01:06:00.890829 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 01:06:00.890843 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 01:06:00.890856 kernel: Dynamic Preempt: voluntary
Oct 9 01:06:00.890870 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:06:00.890884 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:06:00.890897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 01:06:00.890914 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:06:00.890927 kernel: Rude variant of Tasks RCU enabled.
Oct 9 01:06:00.890941 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:06:00.890956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:06:00.890970 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 01:06:00.890983 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 9 01:06:00.890991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:06:00.890998 kernel: Console: colour VGA+ 80x25
Oct 9 01:06:00.891005 kernel: printk: console [ttyS0] enabled
Oct 9 01:06:00.891012 kernel: ACPI: Core revision 20230628
Oct 9 01:06:00.891022 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 01:06:00.891029 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 01:06:00.891036 kernel: x2apic enabled
Oct 9 01:06:00.891043 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 01:06:00.891051 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 9 01:06:00.891058 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 9 01:06:00.891065 kernel: kvm-guest: setup PV IPIs
Oct 9 01:06:00.891082 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 01:06:00.891089 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 01:06:00.891097 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 9 01:06:00.891104 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 01:06:00.891114 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 01:06:00.891122 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 01:06:00.891129 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 01:06:00.891137 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 01:06:00.891144 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 01:06:00.891154 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 01:06:00.891162 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 01:06:00.891169 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 01:06:00.891179 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 01:06:00.891187 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 01:06:00.891195 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 01:06:00.891203 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 01:06:00.891211 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 01:06:00.891221 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 01:06:00.891228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 01:06:00.891236 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 01:06:00.891244 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 01:06:00.891251 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 01:06:00.891259 kernel: Freeing SMP alternatives memory: 32K
Oct 9 01:06:00.891276 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:06:00.891285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:06:00.891292 kernel: landlock: Up and running.
Oct 9 01:06:00.891313 kernel: SELinux: Initializing.
Oct 9 01:06:00.891321 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:06:00.891329 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:06:00.891336 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 01:06:00.891344 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:06:00.891352 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:06:00.891361 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:06:00.891369 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 01:06:00.891376 kernel: ... version: 0
Oct 9 01:06:00.891387 kernel: ... bit width: 48
Oct 9 01:06:00.891394 kernel: ... generic registers: 6
Oct 9 01:06:00.891402 kernel: ... value mask: 0000ffffffffffff
Oct 9 01:06:00.891409 kernel: ... max period: 00007fffffffffff
Oct 9 01:06:00.891417 kernel: ... fixed-purpose events: 0
Oct 9 01:06:00.891424 kernel: ... event mask: 000000000000003f
Oct 9 01:06:00.891432 kernel: signal: max sigframe size: 1776
Oct 9 01:06:00.891439 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:06:00.891460 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:06:00.891471 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:06:00.891478 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 01:06:00.891485 kernel: .... node #0, CPUs: #1 #2 #3
Oct 9 01:06:00.891493 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 01:06:00.891507 kernel: smpboot: Max logical packages: 1
Oct 9 01:06:00.891514 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 9 01:06:00.891522 kernel: devtmpfs: initialized
Oct 9 01:06:00.891529 kernel: x86/mm: Memory block size: 128MB
Oct 9 01:06:00.891537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:06:00.891544 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 01:06:00.891554 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:06:00.891562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:06:00.891569 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:06:00.891577 kernel: audit: type=2000 audit(1728435960.818:1): state=initialized audit_enabled=0 res=1
Oct 9 01:06:00.891584 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:06:00.891592 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 01:06:00.891599 kernel: cpuidle: using governor menu
Oct 9 01:06:00.891607 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:06:00.891614 kernel: dca service started, version 1.12.1
Oct 9 01:06:00.891624 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 01:06:00.891632 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 9 01:06:00.891639 kernel: PCI: Using configuration type 1 for base access
Oct 9 01:06:00.891646 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 01:06:00.891654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:06:00.891661 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:06:00.891669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:06:00.891677 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:06:00.891686 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:06:00.891694 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:06:00.891701 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:06:00.891709 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:06:00.891716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:06:00.891724 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 01:06:00.891731 kernel: ACPI: Interpreter enabled
Oct 9 01:06:00.891738 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 01:06:00.891746 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 01:06:00.891753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 01:06:00.891763 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 01:06:00.891771 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 01:06:00.891778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:06:00.892004 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:06:00.892168 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 01:06:00.892324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 01:06:00.892335 kernel: PCI host bridge to bus 0000:00
Oct 9 01:06:00.892507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 01:06:00.892632 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 01:06:00.892824 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 01:06:00.892967 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 9 01:06:00.893086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 01:06:00.893208 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 01:06:00.893326 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:06:00.893604 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 01:06:00.893754 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 9 01:06:00.893961 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 9 01:06:00.894175 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 9 01:06:00.894339 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 9 01:06:00.894574 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 01:06:00.894754 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 01:06:00.894887 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 9 01:06:00.895014 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 9 01:06:00.895140 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 9 01:06:00.895286 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 9 01:06:00.895415 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 01:06:00.895568 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 9 01:06:00.895706 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 9 01:06:00.895850 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 01:06:00.895978 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 9 01:06:00.896104 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 9 01:06:00.896231 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 9 01:06:00.896357 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 9 01:06:00.896556 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 01:06:00.896689 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 01:06:00.896845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 01:06:00.896972 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 9 01:06:00.897095 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 9 01:06:00.897233 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 01:06:00.897360 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 01:06:00.897375 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 01:06:00.897383 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 01:06:00.897391 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 01:06:00.897398 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 01:06:00.897406 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 01:06:00.897413 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 01:06:00.897421 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 01:06:00.897428 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 01:06:00.897436 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 01:06:00.897460 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 01:06:00.897467 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 01:06:00.897475 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 01:06:00.897482 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 01:06:00.897490 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 01:06:00.897505 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 01:06:00.897513 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 01:06:00.897520 kernel: iommu: Default domain type: Translated
Oct 9 01:06:00.897528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 01:06:00.897539 kernel: PCI: Using ACPI for IRQ routing
Oct 9 01:06:00.897546 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 01:06:00.897554 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 01:06:00.897561 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 9 01:06:00.897707 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 01:06:00.897837 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 01:06:00.897963 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 01:06:00.897973 kernel: vgaarb: loaded
Oct 9 01:06:00.897981 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 01:06:00.897993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 01:06:00.898001 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 01:06:00.898008 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:06:00.898016 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:06:00.898024 kernel: pnp: PnP ACPI init
Oct 9 01:06:00.898172 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 01:06:00.898184 kernel: pnp: PnP ACPI: found 6 devices
Oct 9 01:06:00.898192 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 01:06:00.898203 kernel: NET: Registered PF_INET protocol family
Oct 9 01:06:00.898211 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 01:06:00.898219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 01:06:00.898227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:06:00.898235 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:06:00.898242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 01:06:00.898250 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 01:06:00.898258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:06:00.898265 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:06:00.898275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:06:00.898283 kernel: NET: Registered PF_XDP protocol family
Oct 9 01:06:00.898401 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 01:06:00.898575 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 01:06:00.898714 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 01:06:00.898833 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 9 01:06:00.898949 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 01:06:00.899065 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 01:06:00.899080 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:06:00.899088 kernel: Initialise system trusted keyrings
Oct 9 01:06:00.899095 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 01:06:00.899103 kernel: Key type asymmetric registered
Oct 9 01:06:00.899110 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:06:00.899118 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 01:06:00.899126 kernel: io scheduler mq-deadline registered
Oct 9 01:06:00.899134 kernel: io scheduler kyber registered
Oct 9 01:06:00.899141 kernel: io scheduler bfq registered
Oct 9 01:06:00.899151 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 01:06:00.899159 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 01:06:00.899167 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 01:06:00.899175 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 9 01:06:00.899183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:06:00.899190 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 01:06:00.899198 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 01:06:00.899206 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 01:06:00.899213 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 01:06:00.899356 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 01:06:00.899368 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 01:06:00.899520 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 01:06:00.899645 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T01:06:00 UTC (1728435960)
Oct 9 01:06:00.899764 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 9 01:06:00.899774 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 01:06:00.899781 kernel: NET: Registered PF_INET6 protocol family
Oct 9 01:06:00.899789 kernel: Segment Routing with IPv6
Oct 9 01:06:00.899801 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 01:06:00.899808 kernel: NET: Registered PF_PACKET protocol family
Oct 9 01:06:00.899816 kernel: Key type dns_resolver registered
Oct 9 01:06:00.899823 kernel: IPI shorthand broadcast: enabled
Oct 9 01:06:00.899831 kernel: sched_clock: Marking stable (692002993, 107954325)->(826946239, -26988921)
Oct 9 01:06:00.899839 kernel: registered taskstats version 1
Oct 9 01:06:00.899846 kernel: Loading compiled-in X.509 certificates
Oct 9 01:06:00.899854 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6'
Oct 9 01:06:00.899862 kernel: Key type .fscrypt registered
Oct 9 01:06:00.899872 kernel: Key type fscrypt-provisioning registered
Oct 9 01:06:00.899879 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 01:06:00.899887 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:06:00.899894 kernel: ima: No architecture policies found
Oct 9 01:06:00.899902 kernel: clk: Disabling unused clocks
Oct 9 01:06:00.899909 kernel: Freeing unused kernel image (initmem) memory: 42872K
Oct 9 01:06:00.899917 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 01:06:00.899925 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Oct 9 01:06:00.899932 kernel: Run /init as init process
Oct 9 01:06:00.899942 kernel: with arguments:
Oct 9 01:06:00.899950 kernel: /init
Oct 9 01:06:00.899957 kernel: with environment:
Oct 9 01:06:00.899964 kernel: HOME=/
Oct 9 01:06:00.899972 kernel: TERM=linux
Oct 9 01:06:00.899979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:06:00.899989 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:06:00.899999 systemd[1]: Detected virtualization kvm.
Oct 9 01:06:00.900009 systemd[1]: Detected architecture x86-64.
Oct 9 01:06:00.900017 systemd[1]: Running in initrd.
Oct 9 01:06:00.900025 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:06:00.900033 systemd[1]: Hostname set to .
Oct 9 01:06:00.900041 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:06:00.900049 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:06:00.900057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:00.900066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:00.900078 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:06:00.900098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:06:00.900108 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 01:06:00.900117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 01:06:00.900127 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 01:06:00.900140 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 01:06:00.900148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:00.900157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:00.900165 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:06:00.900173 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:06:00.900182 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:06:00.900190 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:06:00.900198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:06:00.900209 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:06:00.900217 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:06:00.900225 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:06:00.900234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:00.900242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:00.900250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:00.900259 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:06:00.900267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 01:06:00.900277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:06:00.900286 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 01:06:00.900294 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 01:06:00.900302 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:06:00.900310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:06:00.900319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:00.900327 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 01:06:00.900335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:00.900343 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 01:06:00.900354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:06:00.900382 systemd-journald[192]: Collecting audit messages is disabled.
Oct 9 01:06:00.900404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:00.900412 systemd-journald[192]: Journal started
Oct 9 01:06:00.900432 systemd-journald[192]: Runtime Journal (/run/log/journal/e07004f3ec00410a8b3d55641b4f3738) is 6.0M, max 48.4M, 42.3M free.
Oct 9 01:06:00.891351 systemd-modules-load[193]: Inserted module 'overlay'
Oct 9 01:06:00.930389 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 01:06:00.930410 kernel: Bridge firewalling registered
Oct 9 01:06:00.930421 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:06:00.918699 systemd-modules-load[193]: Inserted module 'br_netfilter'
Oct 9 01:06:00.932257 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:00.942711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:06:00.944961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:06:00.945739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:06:00.947696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:00.949761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:00.956283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:00.963994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:00.976675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:00.979612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:00.982765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:00.987531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 01:06:01.006411 dracut-cmdline[230]: dracut-dracut-053
Oct 9 01:06:01.010661 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:06:01.014182 systemd-resolved[222]: Positive Trust Anchors:
Oct 9 01:06:01.014195 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:06:01.014227 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:06:01.016803 systemd-resolved[222]: Defaulting to hostname 'linux'.
Oct 9 01:06:01.018031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:01.025250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:01.114483 kernel: SCSI subsystem initialized
Oct 9 01:06:01.124475 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 01:06:01.134475 kernel: iscsi: registered transport (tcp)
Oct 9 01:06:01.155772 kernel: iscsi: registered transport (qla4xxx)
Oct 9 01:06:01.155810 kernel: QLogic iSCSI HBA Driver
Oct 9 01:06:01.211284 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:06:01.222631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 01:06:01.249363 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 01:06:01.249436 kernel: device-mapper: uevent: version 1.0.3
Oct 9 01:06:01.249475 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 01:06:01.292474 kernel: raid6: avx2x4 gen() 30281 MB/s
Oct 9 01:06:01.309467 kernel: raid6: avx2x2 gen() 30751 MB/s
Oct 9 01:06:01.326559 kernel: raid6: avx2x1 gen() 26059 MB/s
Oct 9 01:06:01.326573 kernel: raid6: using algorithm avx2x2 gen() 30751 MB/s
Oct 9 01:06:01.344598 kernel: raid6: .... xor() 19880 MB/s, rmw enabled
Oct 9 01:06:01.344623 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 01:06:01.364475 kernel: xor: automatically using best checksumming function avx
Oct 9 01:06:01.518493 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 01:06:01.533407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:06:01.547666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:01.560325 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Oct 9 01:06:01.564701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:01.566781 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 01:06:01.586779 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Oct 9 01:06:01.624771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:06:01.636598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:06:01.702352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:01.713653 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 01:06:01.725005 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:06:01.730863 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:06:01.734027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:01.736503 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:06:01.740173 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 01:06:01.742853 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 01:06:01.751369 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 01:06:01.751387 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 01:06:01.748706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 01:06:01.754666 kernel: GPT:9289727 != 19775487
Oct 9 01:06:01.754681 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 01:06:01.754692 kernel: GPT:9289727 != 19775487
Oct 9 01:06:01.754702 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 01:06:01.754712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:01.764050 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:06:01.770564 kernel: libata version 3.00 loaded.
Oct 9 01:06:01.772491 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 01:06:01.774515 kernel: AES CTR mode by8 optimization enabled
Oct 9 01:06:01.775567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:06:01.775759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:01.783569 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:01.787117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:06:01.794669 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
Oct 9 01:06:01.794690 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (470)
Oct 9 01:06:01.787272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:01.796736 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 01:06:01.797021 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 01:06:01.789171 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:01.798716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:01.803300 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 01:06:01.803524 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 01:06:01.803670 kernel: scsi host0: ahci
Oct 9 01:06:01.803897 kernel: scsi host1: ahci
Oct 9 01:06:01.806484 kernel: scsi host2: ahci
Oct 9 01:06:01.809498 kernel: scsi host3: ahci
Oct 9 01:06:01.811472 kernel: scsi host4: ahci
Oct 9 01:06:01.811681 kernel: scsi host5: ahci
Oct 9 01:06:01.811837 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 9 01:06:01.813120 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 9 01:06:01.813354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 01:06:01.819512 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 9 01:06:01.819535 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 9 01:06:01.819545 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 9 01:06:01.819559 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 9 01:06:01.829950 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 01:06:01.866810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:01.878649 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 01:06:01.881367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 01:06:01.889848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:06:01.909634 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 01:06:01.912320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:01.920098 disk-uuid[555]: Primary Header is updated.
Oct 9 01:06:01.920098 disk-uuid[555]: Secondary Entries is updated.
Oct 9 01:06:01.920098 disk-uuid[555]: Secondary Header is updated.
Oct 9 01:06:01.924480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:01.930487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:01.936168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:02.127357 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 01:06:02.127440 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 01:06:02.127483 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 01:06:02.127499 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 01:06:02.127513 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 01:06:02.128485 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 01:06:02.129487 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 01:06:02.130602 kernel: ata3.00: applying bridge limits
Oct 9 01:06:02.130614 kernel: ata3.00: configured for UDMA/100
Oct 9 01:06:02.131484 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 01:06:02.177492 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 01:06:02.177724 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 01:06:02.191481 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 01:06:02.930494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:02.931065 disk-uuid[557]: The operation has completed successfully.
Oct 9 01:06:02.985770 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 01:06:02.985902 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 01:06:02.997617 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 01:06:03.000878 sh[592]: Success
Oct 9 01:06:03.014485 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 01:06:03.046688 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 01:06:03.062299 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 01:06:03.066777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 01:06:03.083478 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377
Oct 9 01:06:03.083509 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:06:03.083524 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 01:06:03.084627 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 01:06:03.086129 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 01:06:03.090115 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 01:06:03.090850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 01:06:03.095600 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 01:06:03.097849 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 01:06:03.108572 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:06:03.108607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:06:03.108618 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:03.112473 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:03.121889 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 01:06:03.123630 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:06:03.133974 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 01:06:03.142622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 01:06:03.246729 ignition[684]: Ignition 2.19.0
Oct 9 01:06:03.246742 ignition[684]: Stage: fetch-offline
Oct 9 01:06:03.246791 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:03.246803 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:03.246929 ignition[684]: parsed url from cmdline: ""
Oct 9 01:06:03.246933 ignition[684]: no config URL provided
Oct 9 01:06:03.246938 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:06:03.246949 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:06:03.246987 ignition[684]: op(1): [started] loading QEMU firmware config module
Oct 9 01:06:03.246993 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 01:06:03.258086 ignition[684]: op(1): [finished] loading QEMU firmware config module
Oct 9 01:06:03.258121 ignition[684]: QEMU firmware config was not found. Ignoring...
Oct 9 01:06:03.269896 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:06:03.286642 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:06:03.302386 ignition[684]: parsing config with SHA512: fc510dcd63101b1e0633c6c46784d850500305d93cef21c0b48ef331ec45ef32b7a91d7c6cf6faf4b70e9ac363a97dbcfa33a90425e81a953d2215588d29ad5f
Oct 9 01:06:03.307401 unknown[684]: fetched base config from "system"
Oct 9 01:06:03.307445 unknown[684]: fetched user config from "qemu"
Oct 9 01:06:03.307864 ignition[684]: fetch-offline: fetch-offline passed
Oct 9 01:06:03.307967 ignition[684]: Ignition finished successfully
Oct 9 01:06:03.309763 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:06:03.314008 systemd-networkd[780]: lo: Link UP
Oct 9 01:06:03.314018 systemd-networkd[780]: lo: Gained carrier
Oct 9 01:06:03.315935 systemd-networkd[780]: Enumeration completed
Oct 9 01:06:03.316508 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:03.316513 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:06:03.319083 systemd-networkd[780]: eth0: Link UP
Oct 9 01:06:03.319088 systemd-networkd[780]: eth0: Gained carrier
Oct 9 01:06:03.319096 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:03.319210 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:06:03.320514 systemd[1]: Reached target network.target - Network.
Oct 9 01:06:03.322553 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 01:06:03.332675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 01:06:03.339546 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:06:03.365552 ignition[783]: Ignition 2.19.0
Oct 9 01:06:03.365564 ignition[783]: Stage: kargs
Oct 9 01:06:03.365733 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:03.365746 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:03.366640 ignition[783]: kargs: kargs passed
Oct 9 01:06:03.366690 ignition[783]: Ignition finished successfully
Oct 9 01:06:03.369900 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:06:03.381591 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:06:03.396433 ignition[792]: Ignition 2.19.0
Oct 9 01:06:03.396476 ignition[792]: Stage: disks
Oct 9 01:06:03.396656 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:03.396669 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:03.399564 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:06:03.397400 ignition[792]: disks: disks passed
Oct 9 01:06:03.402024 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:06:03.397480 ignition[792]: Ignition finished successfully
Oct 9 01:06:03.404253 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:06:03.405782 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:06:03.407601 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:06:03.407659 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:06:03.420583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:06:03.433564 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 01:06:03.440089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:06:03.442728 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:06:03.534471 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 01:06:03.534594 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:06:03.536175 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:06:03.548541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:06:03.550375 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:06:03.551644 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 01:06:03.556813 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Oct 9 01:06:03.556834 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:06:03.551683 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:06:03.563269 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:06:03.563286 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:03.563296 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:03.551705 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:06:03.560334 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:06:03.564439 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:06:03.567325 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:06:03.604114 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:06:03.609380 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:06:03.614489 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:06:03.619043 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:06:03.705556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:06:03.713647 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:06:03.717039 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:06:03.722502 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:06:03.743359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:06:03.754130 ignition[924]: INFO : Ignition 2.19.0
Oct 9 01:06:03.754130 ignition[924]: INFO : Stage: mount
Oct 9 01:06:03.755759 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:03.755759 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:03.758464 ignition[924]: INFO : mount: mount passed
Oct 9 01:06:03.759215 ignition[924]: INFO : Ignition finished successfully
Oct 9 01:06:03.762187 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:06:03.776568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:06:04.082811 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:06:04.099621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:06:04.110481 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Oct 9 01:06:04.110561 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:06:04.110574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:06:04.111926 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:04.114482 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:04.116143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:06:04.156898 ignition[955]: INFO : Ignition 2.19.0
Oct 9 01:06:04.156898 ignition[955]: INFO : Stage: files
Oct 9 01:06:04.158913 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:04.158913 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:04.158913 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:06:04.163197 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:06:04.163197 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:06:04.169930 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:06:04.171654 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:06:04.173098 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:06:04.172278 unknown[955]: wrote ssh authorized keys file for user: core
Oct 9 01:06:04.175947 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:06:04.175947 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 01:06:04.240397 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:06:04.390733 systemd-networkd[780]: eth0: Gained IPv6LL
Oct 9 01:06:04.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:06:04.547846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:06:04.551790 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:06:04.553482 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:06:04.555280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:06:04.557035 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:06:04.558892 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:06:04.560592 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:06:04.562340 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:06:04.564460 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:06:04.566300 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:06:04.568084 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:06:04.570647 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:06:04.570647 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:06:04.575129 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 9 01:06:05.032325 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 01:06:05.491425 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:06:05.491425 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 01:06:05.495282 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:06:05.497534 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:06:05.497534 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 01:06:05.497534 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 01:06:05.501849 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:06:05.503774 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:06:05.503774 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 01:06:05.506912 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:06:05.534627 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:06:05.541326 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:06:05.542929 ignition[955]: INFO : files: files passed
Oct 9 01:06:05.542929 ignition[955]: INFO : Ignition finished successfully
Oct 9 01:06:05.552589 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:06:05.566808 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:06:05.569851 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:06:05.572877 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:06:05.573065 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:06:05.581616 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 01:06:05.584670 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:05.584670 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:05.589020 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:05.587713 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:06:05.589211 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:06:05.599681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:06:05.626690 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:06:05.626843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:06:05.628011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:06:05.630165 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:06:05.633098 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:06:05.633931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:06:05.653357 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:06:05.672733 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:06:05.681702 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:05.682980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:05.685223 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:06:05.687267 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:06:05.687400 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:06:05.689767 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:06:05.691327 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:06:05.693348 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:06:05.695417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:06:05.697462 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:06:05.699615 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:06:05.701763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:06:05.704081 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:06:05.706041 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:06:05.708243 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:06:05.710046 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:06:05.710164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:06:05.712495 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:05.713945 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:05.716021 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:06:05.716163 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:05.718246 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:06:05.718357 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:06:05.720733 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:06:05.720874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:06:05.722741 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:06:05.724542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:06:05.728502 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:05.730220 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:06:05.732237 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:06:05.734036 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:06:05.734133 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:06:05.736111 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:06:05.736241 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:06:05.738557 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:06:05.738675 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:06:05.740608 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:06:05.740715 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:06:05.748599 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:06:05.751217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:06:05.752265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:06:05.752395 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:05.754653 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:06:05.754908 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:06:05.761105 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:06:05.761259 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:06:05.764622 ignition[1011]: INFO : Ignition 2.19.0
Oct 9 01:06:05.764622 ignition[1011]: INFO : Stage: umount
Oct 9 01:06:05.764622 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:05.764622 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:05.764622 ignition[1011]: INFO : umount: umount passed
Oct 9 01:06:05.764622 ignition[1011]: INFO : Ignition finished successfully
Oct 9 01:06:05.765975 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:06:05.766115 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:06:05.767548 systemd[1]: Stopped target network.target - Network.
Oct 9 01:06:05.769315 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:06:05.769373 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:06:05.771620 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:06:05.771671 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:06:05.773397 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:06:05.773458 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:06:05.775273 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:06:05.775337 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:06:05.777517 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:06:05.779705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:05.782688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:06:05.785131 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:06:05.785281 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:05.785500 systemd-networkd[780]: eth0: DHCPv6 lease lost
Oct 9 01:06:05.788045 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:06:05.788226 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:06:05.791174 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:06:05.791301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:05.798638 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:06:05.800152 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:06:05.800230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:06:05.802934 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:06:05.802988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:05.805566 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:06:05.805620 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:05.807097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:06:05.807149 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:05.809518 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:05.822210 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:06:05.822350 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:06:05.836325 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:06:05.836549 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:05.839178 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:06:05.839232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:05.841718 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:06:05.841767 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:05.844113 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:06:05.844168 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:06:05.846655 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:06:05.846705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:06:05.849000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:06:05.849050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:05.861918 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:06:05.863237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:06:05.863307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:05.866189 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:06:05.866242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:05.868970 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:06:05.869020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:05.872009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:06:05.872060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:05.875186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:06:05.875305 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:06:05.956919 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:06:05.957078 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:06:05.959629 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:06:05.961344 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:06:05.961474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:06:05.972697 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:06:05.982680 systemd[1]: Switching root.
Oct 9 01:06:06.018070 systemd-journald[192]: Journal stopped
Oct 9 01:06:07.236146 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:06:07.236217 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:06:07.236236 kernel: SELinux: policy capability open_perms=1
Oct 9 01:06:07.236248 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:06:07.236259 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:06:07.236271 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:06:07.236286 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:06:07.236298 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:06:07.236310 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:06:07.236322 kernel: audit: type=1403 audit(1728435966.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:06:07.236355 systemd[1]: Successfully loaded SELinux policy in 38.820ms.
Oct 9 01:06:07.237124 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.490ms.
Oct 9 01:06:07.237139 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:06:07.237152 systemd[1]: Detected virtualization kvm.
Oct 9 01:06:07.237168 systemd[1]: Detected architecture x86-64.
Oct 9 01:06:07.237180 systemd[1]: Detected first boot.
Oct 9 01:06:07.237193 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:06:07.237205 zram_generator::config[1055]: No configuration found.
Oct 9 01:06:07.237218 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:06:07.237231 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:06:07.237243 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:06:07.237256 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:06:07.237268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:06:07.237284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:06:07.237296 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:06:07.237308 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:06:07.237320 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:06:07.237333 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:06:07.237352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:06:07.237370 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:06:07.237383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:07.237398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:07.237415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:06:07.237427 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:06:07.237439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:06:07.237464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:06:07.237478 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 01:06:07.237490 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:07.237502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:06:07.237515 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:06:07.237530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:06:07.237543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:06:07.237556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:07.237570 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:06:07.237584 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:06:07.237598 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:06:07.237610 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:06:07.237623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:06:07.237638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:07.237650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:07.237663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:07.237675 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:06:07.237687 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:06:07.237703 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:06:07.237716 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:06:07.237728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:07.237741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:06:07.237756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:06:07.237768 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:06:07.237781 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:06:07.237793 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:06:07.237806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:06:07.237819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:07.237831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:06:07.237844 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:06:07.237859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:07.237871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:06:07.237883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:07.237896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:06:07.237908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:07.237921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:06:07.237933 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:06:07.237946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:06:07.237958 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:06:07.237977 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:06:07.237990 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:06:07.238002 kernel: loop: module loaded
Oct 9 01:06:07.238014 kernel: fuse: init (API version 7.39)
Oct 9 01:06:07.238026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:06:07.238039 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:06:07.238051 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:06:07.238064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:06:07.238096 systemd-journald[1122]: Collecting audit messages is disabled.
Oct 9 01:06:07.238121 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:06:07.238133 systemd[1]: Stopped verity-setup.service.
Oct 9 01:06:07.238146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:07.238159 systemd-journald[1122]: Journal started
Oct 9 01:06:07.238180 systemd-journald[1122]: Runtime Journal (/run/log/journal/e07004f3ec00410a8b3d55641b4f3738) is 6.0M, max 48.4M, 42.3M free.
Oct 9 01:06:07.007968 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:06:07.031916 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 01:06:07.032584 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:06:07.244543 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:06:07.245324 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:06:07.246648 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:06:07.248016 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:06:07.249263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:06:07.250777 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
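The "Runtime Journal ... is 6.0M, max 48.4M" line reports journald's size accounting for the volatile journal in /run; after the flush later in this log, the system journal in /var takes over. These budgets can be tuned with a journald drop-in — a hypothetical sketch (the file path and values are illustrative, not from this system):

```ini
# /etc/systemd/journald.conf.d/10-size.conf (hypothetical drop-in)
[Journal]
# Cap the volatile journal under /run/log/journal (used before /var is writable)
RuntimeMaxUse=48M
# Cap the persistent journal under /var/log/journal after the flush
SystemMaxUse=196M
```

Without explicit settings, journald derives these caps from a percentage of the backing filesystem, which is consistent with the non-round "max 48.4M" figure logged here.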
Oct 9 01:06:07.252472 kernel: ACPI: bus type drm_connector registered
Oct 9 01:06:07.252980 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:06:07.254597 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:07.256360 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:06:07.256566 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:06:07.258165 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:06:07.259814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:07.259996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:07.261680 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:06:07.261867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:06:07.263307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:07.263534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:07.265174 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:06:07.265366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:06:07.266814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:07.266989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:07.268596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:07.270029 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:06:07.271612 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:06:07.289253 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:06:07.301555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:06:07.304150 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:06:07.305313 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:06:07.305360 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:06:07.307422 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:06:07.310695 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:06:07.314396 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:06:07.315704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:07.318608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:06:07.322928 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:06:07.324892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:06:07.333390 systemd-journald[1122]: Time spent on flushing to /var/log/journal/e07004f3ec00410a8b3d55641b4f3738 is 24.198ms for 948 entries.
Oct 9 01:06:07.333390 systemd-journald[1122]: System Journal (/var/log/journal/e07004f3ec00410a8b3d55641b4f3738) is 8.0M, max 195.6M, 187.6M free.
Oct 9 01:06:07.370368 systemd-journald[1122]: Received client request to flush runtime journal.
Oct 9 01:06:07.332289 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:06:07.334917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:06:07.337577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:06:07.341311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:06:07.348134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:06:07.351104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:07.352595 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:06:07.354010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:06:07.355549 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:06:07.357913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:06:07.369976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:06:07.376470 kernel: loop0: detected capacity change from 0 to 140992
Oct 9 01:06:07.375296 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:06:07.377834 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:06:07.380783 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:06:07.402935 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 01:06:07.435600 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 9 01:06:07.435621 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 9 01:06:07.439078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:07.443793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:06:07.444543 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:06:07.446390 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:07.455558 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:06:07.461624 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:06:07.484550 kernel: loop1: detected capacity change from 0 to 138192
Oct 9 01:06:07.498507 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:06:07.513625 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:06:07.551976 kernel: loop2: detected capacity change from 0 to 210664
Oct 9 01:06:07.574212 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 9 01:06:07.574641 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 9 01:06:07.581344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:07.587467 kernel: loop3: detected capacity change from 0 to 140992
Oct 9 01:06:07.598482 kernel: loop4: detected capacity change from 0 to 138192
Oct 9 01:06:07.609522 kernel: loop5: detected capacity change from 0 to 210664
Oct 9 01:06:07.614200 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 01:06:07.614833 (sd-merge)[1197]: Merged extensions into '/usr'.
Oct 9 01:06:07.620149 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:06:07.620172 systemd[1]: Reloading...
Oct 9 01:06:07.707543 zram_generator::config[1224]: No configuration found.
Oct 9 01:06:07.826221 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:06:07.867067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:07.918931 systemd[1]: Reloading finished in 298 ms.
Oct 9 01:06:07.959139 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:06:07.960831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:06:07.984706 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:06:07.987021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:06:07.997715 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:06:07.997837 systemd[1]: Reloading...
Oct 9 01:06:08.021855 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:06:08.022732 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:06:08.025817 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:06:08.026243 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 01:06:08.026441 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 01:06:08.030352 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:06:08.032510 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 01:06:08.045987 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:06:08.047629 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 01:06:08.052479 zram_generator::config[1291]: No configuration found.
Oct 9 01:06:08.164003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:08.223941 systemd[1]: Reloading finished in 225 ms.
Oct 9 01:06:08.250537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:06:08.264035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:08.273000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:06:08.275911 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:06:08.278844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:06:08.284639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:08.287745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:08.294131 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:06:08.299420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.299685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:08.301491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:08.304540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:08.307090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:08.308337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:08.311500 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:06:08.314508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.316648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:08.316880 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:08.324175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.324420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:08.330183 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Oct 9 01:06:08.330239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:08.331565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:08.331741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.333870 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:06:08.336755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:08.336946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:08.339556 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:08.339785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:08.341998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:08.342194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:08.346605 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:06:08.360219 augenrules[1361]: No rules
Oct 9 01:06:08.361437 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:06:08.361709 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:06:08.364723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.365047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:08.371845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:08.374272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:06:08.377007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:08.381619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:08.383794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:08.385503 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:06:08.388525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:06:08.389396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:08.392292 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:06:08.395265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:08.395532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:08.398042 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:06:08.398232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:06:08.400286 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:06:08.403147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:08.403384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:08.405106 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:08.405337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:08.408578 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:06:08.419561 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:06:08.435003 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 01:06:08.435475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1386)
Oct 9 01:06:08.451609 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:06:08.452898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:06:08.452973 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:06:08.458615 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:06:08.462146 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:06:08.571475 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1371)
Oct 9 01:06:08.578905 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1371)
Oct 9 01:06:08.607291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:06:08.616639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:06:08.630078 systemd-resolved[1330]: Positive Trust Anchors:
Oct 9 01:06:08.630098 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:06:08.630129 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:06:08.634223 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Oct 9 01:06:08.636305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:08.637674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:08.639457 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:06:08.642477 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 01:06:08.646794 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:06:08.648295 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:06:08.650496 kernel: ACPI: button: Power Button [PWRF]
Oct 9 01:06:08.653956 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 01:06:08.654235 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 01:06:08.655839 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 01:06:08.657745 systemd-networkd[1404]: lo: Link UP
Oct 9 01:06:08.657750 systemd-networkd[1404]: lo: Gained carrier
Oct 9 01:06:08.664815 systemd-networkd[1404]: Enumeration completed
Oct 9 01:06:08.665339 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:08.665343 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:06:08.666570 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 01:06:08.666783 systemd-networkd[1404]: eth0: Link UP
Oct 9 01:06:08.666792 systemd-networkd[1404]: eth0: Gained carrier
Oct 9 01:06:08.666804 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:08.667139 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:06:08.670583 systemd[1]: Reached target network.target - Network.
Oct 9 01:06:08.679614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:06:08.684541 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:06:08.686576 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Oct 9 01:06:09.680227 systemd-resolved[1330]: Clock change detected. Flushing caches.
Oct 9 01:06:09.680347 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 01:06:09.680415 systemd-timesyncd[1406]: Initial clock synchronization to Wed 2024-10-09 01:06:09.680018 UTC.
Oct 9 01:06:09.704100 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 01:06:09.706377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:09.798351 kernel: kvm_amd: TSC scaling supported
Oct 9 01:06:09.798445 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 01:06:09.798459 kernel: kvm_amd: Nested Paging enabled
Oct 9 01:06:09.799491 kernel: kvm_amd: LBR virtualization supported
Oct 9 01:06:09.799524 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 01:06:09.800155 kernel: kvm_amd: Virtual GIF supported
Oct 9 01:06:09.824093 kernel: EDAC MC: Ver: 3.0.0
Oct 9 01:06:09.864037 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:06:09.874272 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:06:09.875861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:09.884363 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:06:09.922039 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:06:09.923964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:09.925382 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:06:09.926996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:06:09.928407 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:06:09.930021 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:06:09.931357 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:06:09.932711 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:06:09.934045 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:06:09.934094 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:06:09.935085 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:06:09.937009 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:06:09.940552 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:06:09.950749 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:06:09.954508 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:06:09.956767 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:06:09.958352 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:06:09.959592 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:06:09.960844 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:06:09.960880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:06:09.962610 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:06:09.965479 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:06:09.969561 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:06:09.970191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:06:09.975259 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:06:09.977792 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:06:09.980853 jq[1439]: false
Oct 9 01:06:09.983057 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:06:09.989792 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:06:09.993395 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:06:09.996422 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:06:10.000395 dbus-daemon[1438]: [system] SELinux support is enabled
Oct 9 01:06:10.004495 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found loop3
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found loop4
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found loop5
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found sr0
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found vda
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found vda1
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found vda2
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found vda3
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found usr
Oct 9 01:06:10.006271 extend-filesystems[1440]: Found vda4
Oct 9 01:06:10.021325 extend-filesystems[1440]: Found vda6
Oct 9 01:06:10.021325 extend-filesystems[1440]: Found vda7
Oct 9 01:06:10.021325 extend-filesystems[1440]: Found vda9
Oct 9 01:06:10.021325 extend-filesystems[1440]: Checking size of /dev/vda9
Oct 9 01:06:10.006383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:06:10.028469 extend-filesystems[1440]: Resized partition /dev/vda9
Oct 9 01:06:10.007097 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:06:10.010578 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:06:10.014391 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:06:10.030751 jq[1456]: true
Oct 9 01:06:10.017154 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:06:10.022165 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:06:10.027476 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:06:10.027773 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:06:10.028310 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:06:10.029036 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:06:10.036272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375)
Oct 9 01:06:10.033226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:06:10.033449 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:06:10.050618 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:06:10.056900 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 01:06:10.058967 jq[1463]: true
Oct 9 01:06:10.060442 update_engine[1453]: I20241009 01:06:10.060102 1453 main.cc:92] Flatcar Update Engine starting
Oct 9 01:06:10.067374 update_engine[1453]: I20241009 01:06:10.067322 1453 update_check_scheduler.cc:74] Next update check in 2m18s
Oct 9 01:06:10.113865 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:06:10.122748 tar[1462]: linux-amd64/helm
Oct 9 01:06:10.123952 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:06:10.125369 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:06:10.125397 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:06:10.126690 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:06:10.126711 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:06:10.130084 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 01:06:10.137270 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:06:10.157511 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 01:06:10.158021 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 01:06:10.159723 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 01:06:10.159723 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 01:06:10.159723 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 01:06:10.159496 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:06:10.229923 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:06:10.230092 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Oct 9 01:06:10.162232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:06:10.165720 systemd-logind[1451]: New seat seat0.
Oct 9 01:06:10.216208 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:06:10.221537 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:06:10.228890 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 01:06:10.230767 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:06:10.326859 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:06:10.352655 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:06:10.364240 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 01:06:10.373122 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 01:06:10.373431 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 01:06:10.381324 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 01:06:10.421428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 01:06:10.432348 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 01:06:10.436025 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 01:06:10.437520 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 01:06:10.475296 containerd[1477]: time="2024-10-09T01:06:10.475192839Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:06:10.517359 containerd[1477]: time="2024-10-09T01:06:10.517143452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.519270 containerd[1477]: time="2024-10-09T01:06:10.519212382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519371530Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519411715Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519723400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519750220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519863703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.519886406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.520213950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.520236151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.520252602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:10.522572 containerd[1477]: time="2024-10-09T01:06:10.520265607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.523047 containerd[1477]: time="2024-10-09T01:06:10.523001077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.523606 containerd[1477]: time="2024-10-09T01:06:10.523578630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:10.523915 containerd[1477]: time="2024-10-09T01:06:10.523877430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:10.523997 containerd[1477]: time="2024-10-09T01:06:10.523978410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:06:10.524216 containerd[1477]: time="2024-10-09T01:06:10.524191840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:06:10.524372 containerd[1477]: time="2024-10-09T01:06:10.524348163Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:06:10.530618 containerd[1477]: time="2024-10-09T01:06:10.530566174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:06:10.530821 containerd[1477]: time="2024-10-09T01:06:10.530803178Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:06:10.530880 containerd[1477]: time="2024-10-09T01:06:10.530866347Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:06:10.530956 containerd[1477]: time="2024-10-09T01:06:10.530938973Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:06:10.531032 containerd[1477]: time="2024-10-09T01:06:10.531006961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:06:10.531317 containerd[1477]: time="2024-10-09T01:06:10.531295742Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:06:10.531805 containerd[1477]: time="2024-10-09T01:06:10.531751226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:06:10.532042 containerd[1477]: time="2024-10-09T01:06:10.532016063Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:06:10.532042 containerd[1477]: time="2024-10-09T01:06:10.532037944Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:06:10.532111 containerd[1477]: time="2024-10-09T01:06:10.532055467Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:06:10.532111 containerd[1477]: time="2024-10-09T01:06:10.532090913Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532111 containerd[1477]: time="2024-10-09T01:06:10.532104910Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532164 containerd[1477]: time="2024-10-09T01:06:10.532118285Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532164 containerd[1477]: time="2024-10-09T01:06:10.532135587Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532164 containerd[1477]: time="2024-10-09T01:06:10.532152469Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532227 containerd[1477]: time="2024-10-09T01:06:10.532165784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532227 containerd[1477]: time="2024-10-09T01:06:10.532179099Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532227 containerd[1477]: time="2024-10-09T01:06:10.532190991Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:06:10.532227 containerd[1477]: time="2024-10-09T01:06:10.532215898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532229243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532243900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532256374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532269568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532282182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532295 containerd[1477]: time="2024-10-09T01:06:10.532294245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532311798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532325213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532340532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532352504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532363655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532375607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532388612Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532408409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532422155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.532446 containerd[1477]: time="2024-10-09T01:06:10.532433045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:06:10.533397 containerd[1477]: time="2024-10-09T01:06:10.533362518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533457556Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533475630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533491209Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533503913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533522378Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533537055Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:06:10.534110 containerd[1477]: time="2024-10-09T01:06:10.533550000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 01:06:10.534318 containerd[1477]: time="2024-10-09T01:06:10.533951332Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 01:06:10.534318 containerd[1477]: time="2024-10-09T01:06:10.534013990Z" level=info msg="Connect containerd service"
Oct 9 01:06:10.534776 containerd[1477]: time="2024-10-09T01:06:10.534744039Z" level=info msg="using legacy CRI server"
Oct 9 01:06:10.534776 containerd[1477]: time="2024-10-09T01:06:10.534770969Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 01:06:10.535234 containerd[1477]: time="2024-10-09T01:06:10.535204152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 01:06:10.537296 containerd[1477]: time="2024-10-09T01:06:10.537269034Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:06:10.537519 containerd[1477]: time="2024-10-09T01:06:10.537482585Z" level=info msg="Start subscribing containerd event"
Oct 9 01:06:10.537612 containerd[1477]: time="2024-10-09T01:06:10.537597400Z" level=info msg="Start recovering state"
Oct 9 01:06:10.537887 containerd[1477]: time="2024-10-09T01:06:10.537867987Z" level=info msg="Start event monitor"
Oct 9 01:06:10.538407 containerd[1477]: time="2024-10-09T01:06:10.538386871Z" level=info msg="Start snapshots syncer"
Oct 9 01:06:10.538496 containerd[1477]: time="2024-10-09T01:06:10.538480376Z" level=info msg="Start cni network conf syncer for default"
Oct 9 01:06:10.538551 containerd[1477]: time="2024-10-09T01:06:10.538538826Z" level=info msg="Start streaming server"
Oct 9 01:06:10.538724 containerd[1477]: time="2024-10-09T01:06:10.538343179Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 01:06:10.538839 containerd[1477]: time="2024-10-09T01:06:10.538823710Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 01:06:10.539155 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 01:06:10.539426 containerd[1477]: time="2024-10-09T01:06:10.539327354Z" level=info msg="containerd successfully booted in 0.065956s"
Oct 9 01:06:10.638553 tar[1462]: linux-amd64/LICENSE
Oct 9 01:06:10.638732 tar[1462]: linux-amd64/README.md
Oct 9 01:06:10.658696 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 01:06:11.014313 systemd-networkd[1404]: eth0: Gained IPv6LL
Oct 9 01:06:11.018123 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 01:06:11.020206 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 01:06:11.030298 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 01:06:11.033046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:11.035578 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 01:06:11.062536 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 9 01:06:11.062920 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 9 01:06:11.065099 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:06:11.066287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:06:12.015896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:12.017693 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 01:06:12.020227 systemd[1]: Startup finished in 827ms (kernel) + 5.670s (initrd) + 4.699s (userspace) = 11.197s.
Oct 9 01:06:12.040595 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:12.693652 kubelet[1552]: E1009 01:06:12.693580 1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:12.698411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:12.698660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:12.699123 systemd[1]: kubelet.service: Consumed 1.526s CPU time.
Oct 9 01:06:19.732590 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:06:19.733982 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:42882.service - OpenSSH per-connection server daemon (10.0.0.1:42882).
Oct 9 01:06:19.782279 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 42882 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:19.784338 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:19.793298 systemd-logind[1451]: New session 1 of user core.
Oct 9 01:06:19.794598 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 01:06:19.803291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 01:06:19.814970 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 01:06:19.825359 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 01:06:19.828367 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 01:06:19.928187 systemd[1570]: Queued start job for default target default.target.
Oct 9 01:06:19.937389 systemd[1570]: Created slice app.slice - User Application Slice.
Oct 9 01:06:19.937415 systemd[1570]: Reached target paths.target - Paths.
Oct 9 01:06:19.937429 systemd[1570]: Reached target timers.target - Timers.
Oct 9 01:06:19.938993 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 01:06:19.950904 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 01:06:19.951045 systemd[1570]: Reached target sockets.target - Sockets.
Oct 9 01:06:19.951080 systemd[1570]: Reached target basic.target - Basic System.
Oct 9 01:06:19.951121 systemd[1570]: Reached target default.target - Main User Target.
Oct 9 01:06:19.951158 systemd[1570]: Startup finished in 116ms.
Oct 9 01:06:19.951522 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 01:06:19.953348 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 01:06:20.022137 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:42898.service - OpenSSH per-connection server daemon (10.0.0.1:42898).
Oct 9 01:06:20.059396 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 42898 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.061014 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.064856 systemd-logind[1451]: New session 2 of user core.
Oct 9 01:06:20.075196 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 01:06:20.128573 sshd[1581]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:20.142679 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:42898.service: Deactivated successfully.
Oct 9 01:06:20.144413 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 01:06:20.145719 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Oct 9 01:06:20.146935 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:42904.service - OpenSSH per-connection server daemon (10.0.0.1:42904).
Oct 9 01:06:20.147764 systemd-logind[1451]: Removed session 2.
Oct 9 01:06:20.183900 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 42904 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.185421 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.189263 systemd-logind[1451]: New session 3 of user core.
Oct 9 01:06:20.199195 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 01:06:20.249229 sshd[1588]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:20.260683 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:42904.service: Deactivated successfully.
Oct 9 01:06:20.262385 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 01:06:20.263777 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Oct 9 01:06:20.264985 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910).
Oct 9 01:06:20.265788 systemd-logind[1451]: Removed session 3.
Oct 9 01:06:20.301951 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.303625 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.307612 systemd-logind[1451]: New session 4 of user core.
Oct 9 01:06:20.321216 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 01:06:20.375394 sshd[1595]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:20.390634 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:42910.service: Deactivated successfully.
Oct 9 01:06:20.392289 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 01:06:20.393616 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Oct 9 01:06:20.409476 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:42912.service - OpenSSH per-connection server daemon (10.0.0.1:42912).
Oct 9 01:06:20.410394 systemd-logind[1451]: Removed session 4.
Oct 9 01:06:20.441620 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 42912 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.443167 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.446950 systemd-logind[1451]: New session 5 of user core.
Oct 9 01:06:20.453214 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 01:06:20.510330 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 01:06:20.510675 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:20.528124 sudo[1606]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:20.530037 sshd[1602]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:20.544136 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:42912.service: Deactivated successfully.
Oct 9 01:06:20.545941 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 01:06:20.547458 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Oct 9 01:06:20.555413 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:42916.service - OpenSSH per-connection server daemon (10.0.0.1:42916).
Oct 9 01:06:20.556608 systemd-logind[1451]: Removed session 5.
Oct 9 01:06:20.587247 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 42916 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.588781 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.592509 systemd-logind[1451]: New session 6 of user core.
Oct 9 01:06:20.608184 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 01:06:20.661517 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 01:06:20.661976 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:20.665848 sudo[1615]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:20.672984 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 01:06:20.673349 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:20.693347 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:06:20.722345 augenrules[1637]: No rules
Oct 9 01:06:20.724122 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:06:20.724373 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:06:20.725504 sudo[1614]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:20.727287 sshd[1611]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:20.740543 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:42916.service: Deactivated successfully.
Oct 9 01:06:20.742194 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 01:06:20.743596 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Oct 9 01:06:20.744844 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:42926.service - OpenSSH per-connection server daemon (10.0.0.1:42926).
Oct 9 01:06:20.745548 systemd-logind[1451]: Removed session 6.
Oct 9 01:06:20.781858 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 42926 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:06:20.783474 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:20.786960 systemd-logind[1451]: New session 7 of user core.
Oct 9 01:06:20.793208 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 01:06:20.846554 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 01:06:20.846904 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:21.113275 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 01:06:21.113413 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 01:06:21.350391 dockerd[1669]: time="2024-10-09T01:06:21.350319113Z" level=info msg="Starting up"
Oct 9 01:06:21.423986 systemd[1]: var-lib-docker-metacopy\x2dcheck448793452-merged.mount: Deactivated successfully.
Oct 9 01:06:21.447982 dockerd[1669]: time="2024-10-09T01:06:21.447937302Z" level=info msg="Loading containers: start."
Oct 9 01:06:21.630093 kernel: Initializing XFRM netlink socket
Oct 9 01:06:21.713312 systemd-networkd[1404]: docker0: Link UP
Oct 9 01:06:21.744818 dockerd[1669]: time="2024-10-09T01:06:21.744767655Z" level=info msg="Loading containers: done."
Oct 9 01:06:21.759594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1027874871-merged.mount: Deactivated successfully.
Oct 9 01:06:21.762584 dockerd[1669]: time="2024-10-09T01:06:21.762541988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 01:06:21.762678 dockerd[1669]: time="2024-10-09T01:06:21.762641675Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 01:06:21.762776 dockerd[1669]: time="2024-10-09T01:06:21.762756039Z" level=info msg="Daemon has completed initialization"
Oct 9 01:06:21.801543 dockerd[1669]: time="2024-10-09T01:06:21.801480932Z" level=info msg="API listen on /run/docker.sock"
Oct 9 01:06:21.801780 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 01:06:22.438121 containerd[1477]: time="2024-10-09T01:06:22.438048724Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 9 01:06:22.949105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:06:22.954214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:23.128631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:23.136146 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:23.186763 kubelet[1882]: E1009 01:06:23.186641 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:23.193943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:23.194205 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:23.315084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193486124.mount: Deactivated successfully.
Oct 9 01:06:24.448481 containerd[1477]: time="2024-10-09T01:06:24.448412794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:24.449541 containerd[1477]: time="2024-10-09T01:06:24.449466250Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097"
Oct 9 01:06:24.451038 containerd[1477]: time="2024-10-09T01:06:24.451000868Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:24.454151 containerd[1477]: time="2024-10-09T01:06:24.454117633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:24.455149 containerd[1477]: time="2024-10-09T01:06:24.455092712Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.016959219s"
Oct 9 01:06:24.455220 containerd[1477]: time="2024-10-09T01:06:24.455152544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\""
Oct 9 01:06:24.476692 containerd[1477]: time="2024-10-09T01:06:24.476659327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 9 01:06:26.561510 containerd[1477]: time="2024-10-09T01:06:26.561352320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.591668 containerd[1477]: time="2024-10-09T01:06:26.591569578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652"
Oct 9 01:06:26.623616 containerd[1477]: time="2024-10-09T01:06:26.623471105Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.662872 containerd[1477]: time="2024-10-09T01:06:26.662283512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.663592 containerd[1477]: time="2024-10-09T01:06:26.663550218Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 2.186855424s"
Oct 9 01:06:26.663651 containerd[1477]: time="2024-10-09T01:06:26.663593860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\""
Oct 9 01:06:26.688750 containerd[1477]: time="2024-10-09T01:06:26.688666850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 9 01:06:27.573292 containerd[1477]: time="2024-10-09T01:06:27.573239720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:27.574662 containerd[1477]: time="2024-10-09T01:06:27.574611172Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987"
Oct 9 01:06:27.575889 containerd[1477]: time="2024-10-09T01:06:27.575851198Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:27.579246 containerd[1477]: time="2024-10-09T01:06:27.579190230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:27.580373 containerd[1477]: time="2024-10-09T01:06:27.580329015Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 891.603606ms"
Oct 9 01:06:27.580373 containerd[1477]: time="2024-10-09T01:06:27.580366566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\""
Oct 9 01:06:27.604527 containerd[1477]: time="2024-10-09T01:06:27.604475519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 9 01:06:29.354862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891331855.mount: Deactivated successfully.
Oct 9 01:06:29.833255 containerd[1477]: time="2024-10-09T01:06:29.833170998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.833911 containerd[1477]: time="2024-10-09T01:06:29.833871201Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362"
Oct 9 01:06:29.835183 containerd[1477]: time="2024-10-09T01:06:29.835158846Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.837158 containerd[1477]: time="2024-10-09T01:06:29.837132978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.837810 containerd[1477]: time="2024-10-09T01:06:29.837765615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 2.233243158s"
Oct 9 01:06:29.837856 containerd[1477]: time="2024-10-09T01:06:29.837810629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\""
Oct 9 01:06:29.860591 containerd[1477]: time="2024-10-09T01:06:29.860520127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 01:06:30.437729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166515851.mount: Deactivated successfully.
Oct 9 01:06:31.095263 containerd[1477]: time="2024-10-09T01:06:31.095192709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.095963 containerd[1477]: time="2024-10-09T01:06:31.095916146Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 9 01:06:31.097177 containerd[1477]: time="2024-10-09T01:06:31.097102841Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.100313 containerd[1477]: time="2024-10-09T01:06:31.100261575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.101545 containerd[1477]: time="2024-10-09T01:06:31.101502833Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.240945566s"
Oct 9 01:06:31.101545 containerd[1477]: time="2024-10-09T01:06:31.101540934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 01:06:31.123237 containerd[1477]: time="2024-10-09T01:06:31.123177992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 01:06:31.985037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504206055.mount: Deactivated successfully.
Oct 9 01:06:31.992309 containerd[1477]: time="2024-10-09T01:06:31.992257726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.993169 containerd[1477]: time="2024-10-09T01:06:31.993115645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 9 01:06:31.994278 containerd[1477]: time="2024-10-09T01:06:31.994236507Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.996767 containerd[1477]: time="2024-10-09T01:06:31.996729362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:31.997634 containerd[1477]: time="2024-10-09T01:06:31.997592251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 874.369054ms"
Oct 9 01:06:31.997634 containerd[1477]: time="2024-10-09T01:06:31.997627116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 01:06:32.019321 containerd[1477]: time="2024-10-09T01:06:32.019279001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Oct 9 01:06:32.861936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645868084.mount: Deactivated successfully.
Oct 9 01:06:33.417466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 01:06:33.431250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:33.612091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:33.617795 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:33.698990 kubelet[2067]: E1009 01:06:33.698730 2067 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:33.703838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:33.704279 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:34.995703 containerd[1477]: time="2024-10-09T01:06:34.995618590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:34.996393 containerd[1477]: time="2024-10-09T01:06:34.996326969Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Oct 9 01:06:34.997672 containerd[1477]: time="2024-10-09T01:06:34.997635693Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:35.000507 containerd[1477]: time="2024-10-09T01:06:35.000458777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:35.001890 containerd[1477]: time="2024-10-09T01:06:35.001852852Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.982534176s"
Oct 9 01:06:35.001934 containerd[1477]: time="2024-10-09T01:06:35.001890823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Oct 9 01:06:38.543452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:38.557317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:38.575680 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-7.scope)...
Oct 9 01:06:38.575719 systemd[1]: Reloading...
Oct 9 01:06:38.669091 zram_generator::config[2233]: No configuration found. Oct 9 01:06:38.879694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:06:38.956171 systemd[1]: Reloading finished in 379 ms. Oct 9 01:06:39.012163 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:06:39.012272 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:06:39.012558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:39.014228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:39.169909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:39.193784 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:06:39.247011 kubelet[2280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:06:39.247011 kubelet[2280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:06:39.247011 kubelet[2280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:06:39.247615 kubelet[2280]: I1009 01:06:39.247030 2280 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:06:39.794247 kubelet[2280]: I1009 01:06:39.794153 2280 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:06:39.794247 kubelet[2280]: I1009 01:06:39.794223 2280 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:06:39.794538 kubelet[2280]: I1009 01:06:39.794507 2280 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:06:39.814570 kubelet[2280]: I1009 01:06:39.814506 2280 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:39.814863 kubelet[2280]: E1009 01:06:39.814840 2280 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.828422 kubelet[2280]: I1009 01:06:39.828372 2280 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:06:39.828738 kubelet[2280]: I1009 01:06:39.828686 2280 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:06:39.828960 kubelet[2280]: I1009 01:06:39.828726 2280 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:06:39.829106 kubelet[2280]: I1009 01:06:39.828978 2280 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:06:39.829106 
kubelet[2280]: I1009 01:06:39.828991 2280 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:06:39.829221 kubelet[2280]: I1009 01:06:39.829185 2280 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:39.829937 kubelet[2280]: I1009 01:06:39.829908 2280 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:06:39.829937 kubelet[2280]: I1009 01:06:39.829930 2280 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:06:39.830011 kubelet[2280]: I1009 01:06:39.829961 2280 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:06:39.830011 kubelet[2280]: I1009 01:06:39.829998 2280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:06:39.832836 kubelet[2280]: W1009 01:06:39.832732 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.832836 kubelet[2280]: E1009 01:06:39.832800 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.833989 kubelet[2280]: W1009 01:06:39.833943 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.833989 kubelet[2280]: E1009 01:06:39.833984 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 
01:06:39.836943 kubelet[2280]: I1009 01:06:39.836884 2280 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:06:39.838135 kubelet[2280]: I1009 01:06:39.838115 2280 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:06:39.838201 kubelet[2280]: W1009 01:06:39.838181 2280 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:06:39.839032 kubelet[2280]: I1009 01:06:39.838921 2280 server.go:1264] "Started kubelet" Oct 9 01:06:39.840420 kubelet[2280]: I1009 01:06:39.840370 2280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:06:39.842214 kubelet[2280]: I1009 01:06:39.841178 2280 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:06:39.842214 kubelet[2280]: I1009 01:06:39.840815 2280 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:06:39.842214 kubelet[2280]: I1009 01:06:39.840523 2280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:06:39.842681 kubelet[2280]: I1009 01:06:39.842662 2280 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:06:39.843600 kubelet[2280]: E1009 01:06:39.843458 2280 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca3625b64ed65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:06:39.838891365 +0000 UTC 
m=+0.639859295,LastTimestamp:2024-10-09 01:06:39.838891365 +0000 UTC m=+0.639859295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:06:39.843683 kubelet[2280]: I1009 01:06:39.843645 2280 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:06:39.843754 kubelet[2280]: I1009 01:06:39.843736 2280 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:06:39.843855 kubelet[2280]: I1009 01:06:39.843837 2280 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:06:39.844394 kubelet[2280]: W1009 01:06:39.844355 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.844451 kubelet[2280]: E1009 01:06:39.844400 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.844664 kubelet[2280]: I1009 01:06:39.844636 2280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:06:39.844824 kubelet[2280]: E1009 01:06:39.844798 2280 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:06:39.844824 kubelet[2280]: E1009 01:06:39.844801 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Oct 9 01:06:39.845396 kubelet[2280]: I1009 01:06:39.845380 2280 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:06:39.845396 kubelet[2280]: I1009 01:06:39.845394 2280 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:06:39.863866 kubelet[2280]: I1009 01:06:39.863815 2280 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:06:39.863866 kubelet[2280]: I1009 01:06:39.863834 2280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:06:39.863866 kubelet[2280]: I1009 01:06:39.863853 2280 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:39.864127 kubelet[2280]: I1009 01:06:39.863878 2280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:06:39.866649 kubelet[2280]: I1009 01:06:39.866587 2280 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:06:39.866649 kubelet[2280]: I1009 01:06:39.866658 2280 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:06:39.866806 kubelet[2280]: I1009 01:06:39.866699 2280 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:06:39.866806 kubelet[2280]: E1009 01:06:39.866766 2280 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:06:39.867654 kubelet[2280]: W1009 01:06:39.867584 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.867654 kubelet[2280]: E1009 01:06:39.867650 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:39.945444 kubelet[2280]: I1009 01:06:39.945406 2280 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:39.945834 kubelet[2280]: E1009 01:06:39.945798 2280 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 9 01:06:39.967038 kubelet[2280]: E1009 01:06:39.966984 2280 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:06:40.045656 kubelet[2280]: E1009 01:06:40.045540 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: 
connection refused" interval="400ms" Oct 9 01:06:40.148009 kubelet[2280]: I1009 01:06:40.147981 2280 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:40.148452 kubelet[2280]: E1009 01:06:40.148400 2280 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 9 01:06:40.167478 kubelet[2280]: E1009 01:06:40.167450 2280 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:06:40.247086 kubelet[2280]: I1009 01:06:40.247000 2280 policy_none.go:49] "None policy: Start" Oct 9 01:06:40.248226 kubelet[2280]: I1009 01:06:40.247888 2280 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:06:40.248226 kubelet[2280]: I1009 01:06:40.247942 2280 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:06:40.259308 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:06:40.278301 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 01:06:40.298204 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 9 01:06:40.299663 kubelet[2280]: I1009 01:06:40.299623 2280 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:06:40.299937 kubelet[2280]: I1009 01:06:40.299879 2280 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:06:40.300092 kubelet[2280]: I1009 01:06:40.300050 2280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:06:40.301707 kubelet[2280]: E1009 01:06:40.301673 2280 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 01:06:40.446537 kubelet[2280]: E1009 01:06:40.446474 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Oct 9 01:06:40.550210 kubelet[2280]: I1009 01:06:40.550081 2280 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:40.550447 kubelet[2280]: E1009 01:06:40.550422 2280 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 9 01:06:40.567601 kubelet[2280]: I1009 01:06:40.567564 2280 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:06:40.568621 kubelet[2280]: I1009 01:06:40.568597 2280 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:06:40.569310 kubelet[2280]: I1009 01:06:40.569282 2280 topology_manager.go:215] "Topology Admit Handler" podUID="8a72c7b5340a1692ff36ecfa7d727520" 
podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:06:40.575834 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. Oct 9 01:06:40.592285 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 9 01:06:40.607887 systemd[1]: Created slice kubepods-burstable-pod8a72c7b5340a1692ff36ecfa7d727520.slice - libcontainer container kubepods-burstable-pod8a72c7b5340a1692ff36ecfa7d727520.slice. Oct 9 01:06:40.648465 kubelet[2280]: I1009 01:06:40.648406 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:40.648465 kubelet[2280]: I1009 01:06:40.648446 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:40.648645 kubelet[2280]: I1009 01:06:40.648471 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:40.648645 kubelet[2280]: I1009 01:06:40.648503 2280 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:06:40.648645 kubelet[2280]: I1009 01:06:40.648522 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:40.648645 kubelet[2280]: I1009 01:06:40.648540 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:40.648645 kubelet[2280]: I1009 01:06:40.648558 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:40.648771 kubelet[2280]: I1009 01:06:40.648578 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:40.648771 kubelet[2280]: I1009 01:06:40.648597 2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:40.688869 kubelet[2280]: W1009 01:06:40.688843 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.688915 kubelet[2280]: E1009 01:06:40.688876 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.779183 kubelet[2280]: W1009 01:06:40.779042 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.779183 kubelet[2280]: E1009 01:06:40.779187 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.873428 kubelet[2280]: W1009 01:06:40.873246 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.873428 kubelet[2280]: E1009 01:06:40.873345 2280 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:40.891538 kubelet[2280]: E1009 01:06:40.891500 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:40.892184 containerd[1477]: time="2024-10-09T01:06:40.892129112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:40.905676 kubelet[2280]: E1009 01:06:40.905631 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:40.906371 containerd[1477]: time="2024-10-09T01:06:40.906326898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:40.910593 kubelet[2280]: E1009 01:06:40.910538 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:40.911021 containerd[1477]: time="2024-10-09T01:06:40.910981637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a72c7b5340a1692ff36ecfa7d727520,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:41.053606 kubelet[2280]: W1009 01:06:41.053506 2280 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:41.053606 kubelet[2280]: E1009 01:06:41.053608 2280 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:41.247777 kubelet[2280]: E1009 01:06:41.247705 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Oct 9 01:06:41.352259 kubelet[2280]: I1009 01:06:41.352201 2280 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:41.352649 kubelet[2280]: E1009 01:06:41.352611 2280 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 9 01:06:41.433564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224511576.mount: Deactivated successfully. 
Oct 9 01:06:41.439641 containerd[1477]: time="2024-10-09T01:06:41.439601839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:41.440412 containerd[1477]: time="2024-10-09T01:06:41.440338731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 01:06:41.441303 containerd[1477]: time="2024-10-09T01:06:41.441277111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:41.444708 containerd[1477]: time="2024-10-09T01:06:41.444649475Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:41.445694 containerd[1477]: time="2024-10-09T01:06:41.445657185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:41.446607 containerd[1477]: time="2024-10-09T01:06:41.446574195Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:41.447388 containerd[1477]: time="2024-10-09T01:06:41.447319983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:41.450164 containerd[1477]: time="2024-10-09T01:06:41.450111999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:41.451693 
containerd[1477]: time="2024-10-09T01:06:41.451663739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.2188ms" Oct 9 01:06:41.452688 containerd[1477]: time="2024-10-09T01:06:41.452651141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.540893ms" Oct 9 01:06:41.453315 containerd[1477]: time="2024-10-09T01:06:41.453235687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.987963ms" Oct 9 01:06:41.754653 containerd[1477]: time="2024-10-09T01:06:41.751217179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:41.755108 containerd[1477]: time="2024-10-09T01:06:41.754665796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:41.755108 containerd[1477]: time="2024-10-09T01:06:41.754685964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.755108 containerd[1477]: time="2024-10-09T01:06:41.754808404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.756564 containerd[1477]: time="2024-10-09T01:06:41.755056459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:41.756564 containerd[1477]: time="2024-10-09T01:06:41.755142400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:41.756564 containerd[1477]: time="2024-10-09T01:06:41.755169220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.756564 containerd[1477]: time="2024-10-09T01:06:41.755258748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.758822 containerd[1477]: time="2024-10-09T01:06:41.758720500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:41.758919 containerd[1477]: time="2024-10-09T01:06:41.758868047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:41.759032 containerd[1477]: time="2024-10-09T01:06:41.758940503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.759302 containerd[1477]: time="2024-10-09T01:06:41.759198887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.790367 systemd[1]: Started cri-containerd-73431ad8beddb2c5b1cba00327db5f1d5060686a340ecb99ee92077d88980048.scope - libcontainer container 73431ad8beddb2c5b1cba00327db5f1d5060686a340ecb99ee92077d88980048. 
Oct 9 01:06:41.794350 systemd[1]: Started cri-containerd-47499874edf8d631e1d6bc93ecfa0ce27baff20cfb4f51c7430163a9815bad90.scope - libcontainer container 47499874edf8d631e1d6bc93ecfa0ce27baff20cfb4f51c7430163a9815bad90. Oct 9 01:06:41.796847 systemd[1]: Started cri-containerd-ce4ac5109887b9c6ca6aa77198c3adfd0661357c129b204da67d4cbce42b6baf.scope - libcontainer container ce4ac5109887b9c6ca6aa77198c3adfd0661357c129b204da67d4cbce42b6baf. Oct 9 01:06:41.880314 kubelet[2280]: E1009 01:06:41.880268 2280 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused Oct 9 01:06:41.895854 containerd[1477]: time="2024-10-09T01:06:41.895789844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce4ac5109887b9c6ca6aa77198c3adfd0661357c129b204da67d4cbce42b6baf\"" Oct 9 01:06:41.896608 containerd[1477]: time="2024-10-09T01:06:41.896335167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"73431ad8beddb2c5b1cba00327db5f1d5060686a340ecb99ee92077d88980048\"" Oct 9 01:06:41.897328 kubelet[2280]: E1009 01:06:41.897285 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:41.898082 kubelet[2280]: E1009 01:06:41.897999 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:41.900751 containerd[1477]: 
time="2024-10-09T01:06:41.900721673Z" level=info msg="CreateContainer within sandbox \"ce4ac5109887b9c6ca6aa77198c3adfd0661357c129b204da67d4cbce42b6baf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:06:41.900910 containerd[1477]: time="2024-10-09T01:06:41.900808897Z" level=info msg="CreateContainer within sandbox \"73431ad8beddb2c5b1cba00327db5f1d5060686a340ecb99ee92077d88980048\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:06:41.904606 containerd[1477]: time="2024-10-09T01:06:41.904350038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a72c7b5340a1692ff36ecfa7d727520,Namespace:kube-system,Attempt:0,} returns sandbox id \"47499874edf8d631e1d6bc93ecfa0ce27baff20cfb4f51c7430163a9815bad90\"" Oct 9 01:06:41.905883 kubelet[2280]: E1009 01:06:41.905826 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:41.908869 containerd[1477]: time="2024-10-09T01:06:41.908826653Z" level=info msg="CreateContainer within sandbox \"47499874edf8d631e1d6bc93ecfa0ce27baff20cfb4f51c7430163a9815bad90\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:06:41.925564 containerd[1477]: time="2024-10-09T01:06:41.925499480Z" level=info msg="CreateContainer within sandbox \"ce4ac5109887b9c6ca6aa77198c3adfd0661357c129b204da67d4cbce42b6baf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed95ea7723b7a1dda031e4c1d19888e531b88f4bce50ac99602055f87e72e52a\"" Oct 9 01:06:41.926318 containerd[1477]: time="2024-10-09T01:06:41.926284653Z" level=info msg="StartContainer for \"ed95ea7723b7a1dda031e4c1d19888e531b88f4bce50ac99602055f87e72e52a\"" Oct 9 01:06:41.931963 containerd[1477]: time="2024-10-09T01:06:41.931921945Z" level=info msg="CreateContainer within sandbox 
\"73431ad8beddb2c5b1cba00327db5f1d5060686a340ecb99ee92077d88980048\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f259589822486481d456bfe71dc35e6cb0e9332d097827c702e385093e88431\"" Oct 9 01:06:41.932505 containerd[1477]: time="2024-10-09T01:06:41.932471596Z" level=info msg="StartContainer for \"9f259589822486481d456bfe71dc35e6cb0e9332d097827c702e385093e88431\"" Oct 9 01:06:41.934537 containerd[1477]: time="2024-10-09T01:06:41.934502975Z" level=info msg="CreateContainer within sandbox \"47499874edf8d631e1d6bc93ecfa0ce27baff20cfb4f51c7430163a9815bad90\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fac17646816ff5af13d41f09425acd26e83d47a2c4dfba87c201b30ff02a25e7\"" Oct 9 01:06:41.934942 containerd[1477]: time="2024-10-09T01:06:41.934915920Z" level=info msg="StartContainer for \"fac17646816ff5af13d41f09425acd26e83d47a2c4dfba87c201b30ff02a25e7\"" Oct 9 01:06:41.976313 systemd[1]: Started cri-containerd-ed95ea7723b7a1dda031e4c1d19888e531b88f4bce50ac99602055f87e72e52a.scope - libcontainer container ed95ea7723b7a1dda031e4c1d19888e531b88f4bce50ac99602055f87e72e52a. Oct 9 01:06:41.981393 systemd[1]: Started cri-containerd-9f259589822486481d456bfe71dc35e6cb0e9332d097827c702e385093e88431.scope - libcontainer container 9f259589822486481d456bfe71dc35e6cb0e9332d097827c702e385093e88431. Oct 9 01:06:41.983096 systemd[1]: Started cri-containerd-fac17646816ff5af13d41f09425acd26e83d47a2c4dfba87c201b30ff02a25e7.scope - libcontainer container fac17646816ff5af13d41f09425acd26e83d47a2c4dfba87c201b30ff02a25e7. 
Oct 9 01:06:42.032289 containerd[1477]: time="2024-10-09T01:06:42.029975610Z" level=info msg="StartContainer for \"ed95ea7723b7a1dda031e4c1d19888e531b88f4bce50ac99602055f87e72e52a\" returns successfully" Oct 9 01:06:42.033760 containerd[1477]: time="2024-10-09T01:06:42.033718998Z" level=info msg="StartContainer for \"fac17646816ff5af13d41f09425acd26e83d47a2c4dfba87c201b30ff02a25e7\" returns successfully" Oct 9 01:06:42.045894 containerd[1477]: time="2024-10-09T01:06:42.045050136Z" level=info msg="StartContainer for \"9f259589822486481d456bfe71dc35e6cb0e9332d097827c702e385093e88431\" returns successfully" Oct 9 01:06:42.876913 kubelet[2280]: E1009 01:06:42.876876 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.879211 kubelet[2280]: E1009 01:06:42.879182 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.880859 kubelet[2280]: E1009 01:06:42.880751 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.954228 kubelet[2280]: I1009 01:06:42.954141 2280 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:43.266321 kubelet[2280]: E1009 01:06:43.266262 2280 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:06:43.422232 kubelet[2280]: E1009 01:06:43.422103 2280 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fca3625b64ed65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:06:39.838891365 +0000 UTC m=+0.639859295,LastTimestamp:2024-10-09 01:06:39.838891365 +0000 UTC m=+0.639859295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:06:43.423616 kubelet[2280]: I1009 01:06:43.423567 2280 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:06:43.434904 kubelet[2280]: E1009 01:06:43.434704 2280 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:06:43.840185 kubelet[2280]: I1009 01:06:43.840131 2280 apiserver.go:52] "Watching apiserver" Oct 9 01:06:43.844038 kubelet[2280]: I1009 01:06:43.843999 2280 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:06:43.888331 kubelet[2280]: E1009 01:06:43.888279 2280 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:43.888913 kubelet[2280]: E1009 01:06:43.888708 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:44.888647 kubelet[2280]: E1009 01:06:44.888600 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:44.951318 kubelet[2280]: E1009 01:06:44.951270 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:45.107113 systemd[1]: Reloading requested from client PID 2567 ('systemctl') (unit session-7.scope)... Oct 9 01:06:45.107129 systemd[1]: Reloading... Oct 9 01:06:45.194103 zram_generator::config[2609]: No configuration found. Oct 9 01:06:45.299783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:06:45.392138 systemd[1]: Reloading finished in 284 ms. Oct 9 01:06:45.437629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:45.462728 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:06:45.463344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:45.463424 systemd[1]: kubelet.service: Consumed 1.198s CPU time, 118.6M memory peak, 0B memory swap peak. Oct 9 01:06:45.486810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:45.639692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:45.646345 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:06:45.692479 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:06:45.692479 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:06:45.692479 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:06:45.692943 kubelet[2651]: I1009 01:06:45.692529 2651 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:06:45.697377 kubelet[2651]: I1009 01:06:45.697348 2651 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:06:45.697377 kubelet[2651]: I1009 01:06:45.697369 2651 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:06:45.697535 kubelet[2651]: I1009 01:06:45.697514 2651 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:06:45.698711 kubelet[2651]: I1009 01:06:45.698688 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:06:45.699779 kubelet[2651]: I1009 01:06:45.699735 2651 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:45.707473 kubelet[2651]: I1009 01:06:45.707437 2651 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:06:45.708034 kubelet[2651]: I1009 01:06:45.707805 2651 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:06:45.708502 kubelet[2651]: I1009 01:06:45.707861 2651 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:06:45.708602 kubelet[2651]: I1009 01:06:45.708514 2651 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:06:45.708602 
kubelet[2651]: I1009 01:06:45.708526 2651 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:06:45.708602 kubelet[2651]: I1009 01:06:45.708579 2651 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:45.708747 kubelet[2651]: I1009 01:06:45.708725 2651 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:06:45.708747 kubelet[2651]: I1009 01:06:45.708741 2651 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:06:45.708796 kubelet[2651]: I1009 01:06:45.708766 2651 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:06:45.708796 kubelet[2651]: I1009 01:06:45.708787 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:06:45.710550 kubelet[2651]: I1009 01:06:45.709663 2651 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:06:45.710550 kubelet[2651]: I1009 01:06:45.710092 2651 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:06:45.710717 kubelet[2651]: I1009 01:06:45.710695 2651 server.go:1264] "Started kubelet" Oct 9 01:06:45.712606 kubelet[2651]: I1009 01:06:45.710813 2651 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:06:45.712606 kubelet[2651]: I1009 01:06:45.711077 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:06:45.712606 kubelet[2651]: I1009 01:06:45.711855 2651 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:06:45.712606 kubelet[2651]: I1009 01:06:45.712466 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:06:45.714493 kubelet[2651]: I1009 01:06:45.714401 2651 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:06:45.716799 kubelet[2651]: E1009 01:06:45.716781 2651 kubelet.go:1467] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:06:45.720366 kubelet[2651]: E1009 01:06:45.720308 2651 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:06:45.720366 kubelet[2651]: I1009 01:06:45.720368 2651 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:06:45.720461 kubelet[2651]: I1009 01:06:45.720456 2651 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:06:45.720710 kubelet[2651]: I1009 01:06:45.720685 2651 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:06:45.723316 kubelet[2651]: I1009 01:06:45.723280 2651 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:06:45.723441 kubelet[2651]: I1009 01:06:45.723416 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:06:45.725510 kubelet[2651]: I1009 01:06:45.725185 2651 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:06:45.726526 kubelet[2651]: I1009 01:06:45.726479 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:06:45.728098 kubelet[2651]: I1009 01:06:45.728076 2651 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:06:45.728153 kubelet[2651]: I1009 01:06:45.728113 2651 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:06:45.728153 kubelet[2651]: I1009 01:06:45.728136 2651 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:06:45.728209 kubelet[2651]: E1009 01:06:45.728184 2651 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:06:45.759085 kubelet[2651]: I1009 01:06:45.759014 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:06:45.759085 kubelet[2651]: I1009 01:06:45.759033 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:06:45.759085 kubelet[2651]: I1009 01:06:45.759053 2651 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:45.759300 kubelet[2651]: I1009 01:06:45.759220 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:06:45.759300 kubelet[2651]: I1009 01:06:45.759231 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:06:45.759300 kubelet[2651]: I1009 01:06:45.759250 2651 policy_none.go:49] "None policy: Start" Oct 9 01:06:45.759791 kubelet[2651]: I1009 01:06:45.759773 2651 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:06:45.759828 kubelet[2651]: I1009 01:06:45.759795 2651 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:06:45.759929 kubelet[2651]: I1009 01:06:45.759911 2651 state_mem.go:75] "Updated machine memory state" Oct 9 01:06:45.764480 kubelet[2651]: I1009 01:06:45.764336 2651 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:06:45.764612 kubelet[2651]: I1009 01:06:45.764518 2651 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:06:45.764612 kubelet[2651]: I1009 01:06:45.764609 2651 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:06:45.825383 kubelet[2651]: I1009 01:06:45.825343 2651 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:45.828497 kubelet[2651]: I1009 01:06:45.828453 2651 topology_manager.go:215] "Topology Admit Handler" podUID="8a72c7b5340a1692ff36ecfa7d727520" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:06:45.828585 kubelet[2651]: I1009 01:06:45.828562 2651 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:06:45.828674 kubelet[2651]: I1009 01:06:45.828654 2651 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:06:45.853802 kubelet[2651]: E1009 01:06:45.853767 2651 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:45.854391 kubelet[2651]: E1009 01:06:45.854357 2651 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 9 01:06:45.855323 kubelet[2651]: I1009 01:06:45.855292 2651 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 01:06:45.855444 kubelet[2651]: I1009 01:06:45.855423 2651 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:06:46.021651 kubelet[2651]: I1009 01:06:46.021485 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:46.021651 kubelet[2651]: I1009 
01:06:46.021535 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:46.021651 kubelet[2651]: I1009 01:06:46.021557 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:46.021651 kubelet[2651]: I1009 01:06:46.021576 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:46.021912 kubelet[2651]: I1009 01:06:46.021641 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:46.021912 kubelet[2651]: I1009 01:06:46.021705 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a72c7b5340a1692ff36ecfa7d727520-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a72c7b5340a1692ff36ecfa7d727520\") " pod="kube-system/kube-apiserver-localhost" Oct 9 
01:06:46.021912 kubelet[2651]: I1009 01:06:46.021728 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:46.021912 kubelet[2651]: I1009 01:06:46.021749 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:46.021912 kubelet[2651]: I1009 01:06:46.021781 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:06:46.155050 kubelet[2651]: E1009 01:06:46.154840 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.155050 kubelet[2651]: E1009 01:06:46.154912 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.155050 kubelet[2651]: E1009 01:06:46.154966 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.710209 kubelet[2651]: I1009 01:06:46.710157 2651 apiserver.go:52] "Watching apiserver" Oct 9 
01:06:46.720792 kubelet[2651]: I1009 01:06:46.720735 2651 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:06:46.743847 kubelet[2651]: E1009 01:06:46.743805 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.746933 kubelet[2651]: E1009 01:06:46.746853 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.748402 kubelet[2651]: E1009 01:06:46.748375 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:46.808505 kubelet[2651]: I1009 01:06:46.808280 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.808255774 podStartE2EDuration="2.808255774s" podCreationTimestamp="2024-10-09 01:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:46.800409789 +0000 UTC m=+1.149563162" watchObservedRunningTime="2024-10-09 01:06:46.808255774 +0000 UTC m=+1.157409137" Oct 9 01:06:46.808505 kubelet[2651]: I1009 01:06:46.808398 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.808394479 podStartE2EDuration="1.808394479s" podCreationTimestamp="2024-10-09 01:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:46.807589762 +0000 UTC m=+1.156743135" watchObservedRunningTime="2024-10-09 01:06:46.808394479 +0000 UTC m=+1.157547852" Oct 9 
01:06:46.817641 kubelet[2651]: I1009 01:06:46.817511 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.8174916850000002 podStartE2EDuration="2.817491685s" podCreationTimestamp="2024-10-09 01:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:46.817384881 +0000 UTC m=+1.166538254" watchObservedRunningTime="2024-10-09 01:06:46.817491685 +0000 UTC m=+1.166645058" Oct 9 01:06:47.745585 kubelet[2651]: E1009 01:06:47.745544 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:50.226310 sudo[1648]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:50.228637 sshd[1645]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:50.232346 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:42926.service: Deactivated successfully. Oct 9 01:06:50.235449 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:06:50.235698 systemd[1]: session-7.scope: Consumed 5.686s CPU time, 188.4M memory peak, 0B memory swap peak. Oct 9 01:06:50.237429 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:06:50.238532 systemd-logind[1451]: Removed session 7. 
Oct 9 01:06:50.976472 kubelet[2651]: E1009 01:06:50.976414 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:53.473863 kubelet[2651]: E1009 01:06:53.473824 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:53.753629 kubelet[2651]: E1009 01:06:53.753503 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:54.657440 kubelet[2651]: E1009 01:06:54.657371 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:54.755729 kubelet[2651]: E1009 01:06:54.755620 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:54.755729 kubelet[2651]: E1009 01:06:54.755640 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:54.828462 update_engine[1453]: I20241009 01:06:54.828328 1453 update_attempter.cc:509] Updating boot flags... 
Oct 9 01:06:54.861109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2746) Oct 9 01:06:54.896100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2750) Oct 9 01:06:54.926089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2750) Oct 9 01:06:59.069475 kubelet[2651]: I1009 01:06:59.069428 2651 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:06:59.069894 containerd[1477]: time="2024-10-09T01:06:59.069767988Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 01:06:59.070201 kubelet[2651]: I1009 01:06:59.069965 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:06:59.912869 kubelet[2651]: I1009 01:06:59.912805 2651 topology_manager.go:215] "Topology Admit Handler" podUID="47ed83ed-689b-4c96-b8a4-48f8368e61c4" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-fq75r" Oct 9 01:06:59.922752 systemd[1]: Created slice kubepods-besteffort-pod47ed83ed_689b_4c96_b8a4_48f8368e61c4.slice - libcontainer container kubepods-besteffort-pod47ed83ed_689b_4c96_b8a4_48f8368e61c4.slice. 
Oct 9 01:07:00.008715 kubelet[2651]: I1009 01:07:00.008671 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47ed83ed-689b-4c96-b8a4-48f8368e61c4-var-lib-calico\") pod \"tigera-operator-77f994b5bb-fq75r\" (UID: \"47ed83ed-689b-4c96-b8a4-48f8368e61c4\") " pod="tigera-operator/tigera-operator-77f994b5bb-fq75r"
Oct 9 01:07:00.008715 kubelet[2651]: I1009 01:07:00.008718 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknw4\" (UniqueName: \"kubernetes.io/projected/47ed83ed-689b-4c96-b8a4-48f8368e61c4-kube-api-access-hknw4\") pod \"tigera-operator-77f994b5bb-fq75r\" (UID: \"47ed83ed-689b-4c96-b8a4-48f8368e61c4\") " pod="tigera-operator/tigera-operator-77f994b5bb-fq75r"
Oct 9 01:07:00.080990 kubelet[2651]: I1009 01:07:00.080747 2651 topology_manager.go:215] "Topology Admit Handler" podUID="af788ddc-f700-4d88-a718-2a548ef64d29" podNamespace="kube-system" podName="kube-proxy-wpr22"
Oct 9 01:07:00.089714 systemd[1]: Created slice kubepods-besteffort-podaf788ddc_f700_4d88_a718_2a548ef64d29.slice - libcontainer container kubepods-besteffort-podaf788ddc_f700_4d88_a718_2a548ef64d29.slice.
Oct 9 01:07:00.109267 kubelet[2651]: I1009 01:07:00.109210 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af788ddc-f700-4d88-a718-2a548ef64d29-kube-proxy\") pod \"kube-proxy-wpr22\" (UID: \"af788ddc-f700-4d88-a718-2a548ef64d29\") " pod="kube-system/kube-proxy-wpr22"
Oct 9 01:07:00.109267 kubelet[2651]: I1009 01:07:00.109262 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af788ddc-f700-4d88-a718-2a548ef64d29-lib-modules\") pod \"kube-proxy-wpr22\" (UID: \"af788ddc-f700-4d88-a718-2a548ef64d29\") " pod="kube-system/kube-proxy-wpr22"
Oct 9 01:07:00.109267 kubelet[2651]: I1009 01:07:00.109291 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af788ddc-f700-4d88-a718-2a548ef64d29-xtables-lock\") pod \"kube-proxy-wpr22\" (UID: \"af788ddc-f700-4d88-a718-2a548ef64d29\") " pod="kube-system/kube-proxy-wpr22"
Oct 9 01:07:00.109556 kubelet[2651]: I1009 01:07:00.109308 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9p5\" (UniqueName: \"kubernetes.io/projected/af788ddc-f700-4d88-a718-2a548ef64d29-kube-api-access-9f9p5\") pod \"kube-proxy-wpr22\" (UID: \"af788ddc-f700-4d88-a718-2a548ef64d29\") " pod="kube-system/kube-proxy-wpr22"
Oct 9 01:07:00.234350 containerd[1477]: time="2024-10-09T01:07:00.234301836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-fq75r,Uid:47ed83ed-689b-4c96-b8a4-48f8368e61c4,Namespace:tigera-operator,Attempt:0,}"
Oct 9 01:07:00.259778 containerd[1477]: time="2024-10-09T01:07:00.259685797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:00.259876 containerd[1477]: time="2024-10-09T01:07:00.259764265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:00.260544 containerd[1477]: time="2024-10-09T01:07:00.260371222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:00.260544 containerd[1477]: time="2024-10-09T01:07:00.260489014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:00.282237 systemd[1]: Started cri-containerd-eb9faa70b0d4023983570e9826cc766140fe61762a9c6013dcb69d93fdc3f798.scope - libcontainer container eb9faa70b0d4023983570e9826cc766140fe61762a9c6013dcb69d93fdc3f798.
Oct 9 01:07:00.319996 containerd[1477]: time="2024-10-09T01:07:00.319960569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-fq75r,Uid:47ed83ed-689b-4c96-b8a4-48f8368e61c4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb9faa70b0d4023983570e9826cc766140fe61762a9c6013dcb69d93fdc3f798\""
Oct 9 01:07:00.322998 containerd[1477]: time="2024-10-09T01:07:00.322964394Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 01:07:00.393252 kubelet[2651]: E1009 01:07:00.393207 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:00.393709 containerd[1477]: time="2024-10-09T01:07:00.393643614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpr22,Uid:af788ddc-f700-4d88-a718-2a548ef64d29,Namespace:kube-system,Attempt:0,}"
Oct 9 01:07:00.417472 containerd[1477]: time="2024-10-09T01:07:00.417314788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:00.417472 containerd[1477]: time="2024-10-09T01:07:00.417401291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:00.417472 containerd[1477]: time="2024-10-09T01:07:00.417417281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:00.417643 containerd[1477]: time="2024-10-09T01:07:00.417524204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:00.436230 systemd[1]: Started cri-containerd-d3386cb149befc56d854ff30fc4d6faa6c4dc1f1a4087b9d3d38c4f8b41ea834.scope - libcontainer container d3386cb149befc56d854ff30fc4d6faa6c4dc1f1a4087b9d3d38c4f8b41ea834.
Oct 9 01:07:00.459427 containerd[1477]: time="2024-10-09T01:07:00.459396526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpr22,Uid:af788ddc-f700-4d88-a718-2a548ef64d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3386cb149befc56d854ff30fc4d6faa6c4dc1f1a4087b9d3d38c4f8b41ea834\""
Oct 9 01:07:00.459921 kubelet[2651]: E1009 01:07:00.459886 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:00.461659 containerd[1477]: time="2024-10-09T01:07:00.461628283Z" level=info msg="CreateContainer within sandbox \"d3386cb149befc56d854ff30fc4d6faa6c4dc1f1a4087b9d3d38c4f8b41ea834\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 01:07:00.477195 containerd[1477]: time="2024-10-09T01:07:00.477141062Z" level=info msg="CreateContainer within sandbox \"d3386cb149befc56d854ff30fc4d6faa6c4dc1f1a4087b9d3d38c4f8b41ea834\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccf0cabcd189b13c9520bbe17cfb183a236f912b4b362318c59898cb1563b975\""
Oct 9 01:07:00.477666 containerd[1477]: time="2024-10-09T01:07:00.477623284Z" level=info msg="StartContainer for \"ccf0cabcd189b13c9520bbe17cfb183a236f912b4b362318c59898cb1563b975\""
Oct 9 01:07:00.505206 systemd[1]: Started cri-containerd-ccf0cabcd189b13c9520bbe17cfb183a236f912b4b362318c59898cb1563b975.scope - libcontainer container ccf0cabcd189b13c9520bbe17cfb183a236f912b4b362318c59898cb1563b975.
Oct 9 01:07:00.537101 containerd[1477]: time="2024-10-09T01:07:00.536956677Z" level=info msg="StartContainer for \"ccf0cabcd189b13c9520bbe17cfb183a236f912b4b362318c59898cb1563b975\" returns successfully"
Oct 9 01:07:00.764783 kubelet[2651]: E1009 01:07:00.764373 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:00.977779 kubelet[2651]: E1009 01:07:00.977746 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:00.985274 kubelet[2651]: I1009 01:07:00.985183 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wpr22" podStartSLOduration=0.985158992 podStartE2EDuration="985.158992ms" podCreationTimestamp="2024-10-09 01:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:00.773610947 +0000 UTC m=+15.122764310" watchObservedRunningTime="2024-10-09 01:07:00.985158992 +0000 UTC m=+15.334312365"
Oct 9 01:07:01.669961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397275670.mount: Deactivated successfully.
Oct 9 01:07:03.148948 containerd[1477]: time="2024-10-09T01:07:03.148883131Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:03.149618 containerd[1477]: time="2024-10-09T01:07:03.149565689Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517"
Oct 9 01:07:03.150680 containerd[1477]: time="2024-10-09T01:07:03.150635037Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:03.152826 containerd[1477]: time="2024-10-09T01:07:03.152795153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:03.153475 containerd[1477]: time="2024-10-09T01:07:03.153444619Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.83044044s"
Oct 9 01:07:03.153475 containerd[1477]: time="2024-10-09T01:07:03.153472902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 01:07:03.157990 containerd[1477]: time="2024-10-09T01:07:03.157963154Z" level=info msg="CreateContainer within sandbox \"eb9faa70b0d4023983570e9826cc766140fe61762a9c6013dcb69d93fdc3f798\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 01:07:03.170743 containerd[1477]: time="2024-10-09T01:07:03.170706390Z" level=info msg="CreateContainer within sandbox \"eb9faa70b0d4023983570e9826cc766140fe61762a9c6013dcb69d93fdc3f798\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a378658f57fc9d505f506634588e1134da06eb258edd8d0e5f468bdf163658e1\""
Oct 9 01:07:03.171056 containerd[1477]: time="2024-10-09T01:07:03.171019000Z" level=info msg="StartContainer for \"a378658f57fc9d505f506634588e1134da06eb258edd8d0e5f468bdf163658e1\""
Oct 9 01:07:03.204209 systemd[1]: Started cri-containerd-a378658f57fc9d505f506634588e1134da06eb258edd8d0e5f468bdf163658e1.scope - libcontainer container a378658f57fc9d505f506634588e1134da06eb258edd8d0e5f468bdf163658e1.
Oct 9 01:07:03.229933 containerd[1477]: time="2024-10-09T01:07:03.229890606Z" level=info msg="StartContainer for \"a378658f57fc9d505f506634588e1134da06eb258edd8d0e5f468bdf163658e1\" returns successfully"
Oct 9 01:07:03.854324 kubelet[2651]: I1009 01:07:03.854258 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-fq75r" podStartSLOduration=2.019898836 podStartE2EDuration="4.854236232s" podCreationTimestamp="2024-10-09 01:06:59 +0000 UTC" firstStartedPulling="2024-10-09 01:07:00.321521939 +0000 UTC m=+14.670675312" lastFinishedPulling="2024-10-09 01:07:03.155859335 +0000 UTC m=+17.505012708" observedRunningTime="2024-10-09 01:07:03.853724687 +0000 UTC m=+18.202878060" watchObservedRunningTime="2024-10-09 01:07:03.854236232 +0000 UTC m=+18.203389605"
Oct 9 01:07:06.150617 kubelet[2651]: I1009 01:07:06.149707 2651 topology_manager.go:215] "Topology Admit Handler" podUID="58cf420b-adb9-473c-baad-4d7527089c0f" podNamespace="calico-system" podName="calico-typha-55f65bbf6d-plqdg"
Oct 9 01:07:06.167483 systemd[1]: Created slice kubepods-besteffort-pod58cf420b_adb9_473c_baad_4d7527089c0f.slice - libcontainer container kubepods-besteffort-pod58cf420b_adb9_473c_baad_4d7527089c0f.slice.
Oct 9 01:07:06.193423 kubelet[2651]: I1009 01:07:06.193367 2651 topology_manager.go:215] "Topology Admit Handler" podUID="924e4aa9-9416-41f7-9e42-8cb3b0645746" podNamespace="calico-system" podName="calico-node-jm5pm"
Oct 9 01:07:06.202588 systemd[1]: Created slice kubepods-besteffort-pod924e4aa9_9416_41f7_9e42_8cb3b0645746.slice - libcontainer container kubepods-besteffort-pod924e4aa9_9416_41f7_9e42_8cb3b0645746.slice.
Oct 9 01:07:06.249680 kubelet[2651]: I1009 01:07:06.249623 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7jv\" (UniqueName: \"kubernetes.io/projected/58cf420b-adb9-473c-baad-4d7527089c0f-kube-api-access-th7jv\") pod \"calico-typha-55f65bbf6d-plqdg\" (UID: \"58cf420b-adb9-473c-baad-4d7527089c0f\") " pod="calico-system/calico-typha-55f65bbf6d-plqdg"
Oct 9 01:07:06.249680 kubelet[2651]: I1009 01:07:06.249669 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/924e4aa9-9416-41f7-9e42-8cb3b0645746-node-certs\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249680 kubelet[2651]: I1009 01:07:06.249689 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-cni-log-dir\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249680 kubelet[2651]: I1009 01:07:06.249705 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58cf420b-adb9-473c-baad-4d7527089c0f-tigera-ca-bundle\") pod \"calico-typha-55f65bbf6d-plqdg\" (UID: \"58cf420b-adb9-473c-baad-4d7527089c0f\") " pod="calico-system/calico-typha-55f65bbf6d-plqdg"
Oct 9 01:07:06.249962 kubelet[2651]: I1009 01:07:06.249722 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-lib-modules\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249962 kubelet[2651]: I1009 01:07:06.249798 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-cni-bin-dir\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249962 kubelet[2651]: I1009 01:07:06.249830 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-policysync\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249962 kubelet[2651]: I1009 01:07:06.249846 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-cni-net-dir\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.249962 kubelet[2651]: I1009 01:07:06.249863 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-var-lib-calico\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.250159 kubelet[2651]: I1009 01:07:06.249879 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcqx\" (UniqueName: \"kubernetes.io/projected/924e4aa9-9416-41f7-9e42-8cb3b0645746-kube-api-access-dgcqx\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.250159 kubelet[2651]: I1009 01:07:06.249910 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/58cf420b-adb9-473c-baad-4d7527089c0f-typha-certs\") pod \"calico-typha-55f65bbf6d-plqdg\" (UID: \"58cf420b-adb9-473c-baad-4d7527089c0f\") " pod="calico-system/calico-typha-55f65bbf6d-plqdg"
Oct 9 01:07:06.250159 kubelet[2651]: I1009 01:07:06.249927 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-xtables-lock\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.250159 kubelet[2651]: I1009 01:07:06.249943 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924e4aa9-9416-41f7-9e42-8cb3b0645746-tigera-ca-bundle\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.250159 kubelet[2651]: I1009 01:07:06.249966 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-flexvol-driver-host\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.250326 kubelet[2651]: I1009 01:07:06.249985 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/924e4aa9-9416-41f7-9e42-8cb3b0645746-var-run-calico\") pod \"calico-node-jm5pm\" (UID: \"924e4aa9-9416-41f7-9e42-8cb3b0645746\") " pod="calico-system/calico-node-jm5pm"
Oct 9 01:07:06.305011 kubelet[2651]: I1009 01:07:06.304944 2651 topology_manager.go:215] "Topology Admit Handler" podUID="f3fea10d-7895-4144-8231-a605fca41c0d" podNamespace="calico-system" podName="csi-node-driver-nqtxw"
Oct 9 01:07:06.305650 kubelet[2651]: E1009 01:07:06.305586 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d"
Oct 9 01:07:06.351571 kubelet[2651]: I1009 01:07:06.351277 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3fea10d-7895-4144-8231-a605fca41c0d-socket-dir\") pod \"csi-node-driver-nqtxw\" (UID: \"f3fea10d-7895-4144-8231-a605fca41c0d\") " pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:06.351571 kubelet[2651]: I1009 01:07:06.351367 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3fea10d-7895-4144-8231-a605fca41c0d-varrun\") pod \"csi-node-driver-nqtxw\" (UID: \"f3fea10d-7895-4144-8231-a605fca41c0d\") " pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:06.351571 kubelet[2651]: I1009 01:07:06.351389 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3fea10d-7895-4144-8231-a605fca41c0d-kubelet-dir\") pod \"csi-node-driver-nqtxw\" (UID: \"f3fea10d-7895-4144-8231-a605fca41c0d\") " pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:06.351571 kubelet[2651]: I1009 01:07:06.351443 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt7s\" (UniqueName: \"kubernetes.io/projected/f3fea10d-7895-4144-8231-a605fca41c0d-kube-api-access-8dt7s\") pod \"csi-node-driver-nqtxw\" (UID: \"f3fea10d-7895-4144-8231-a605fca41c0d\") " pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:06.352159 kubelet[2651]: I1009 01:07:06.351506 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3fea10d-7895-4144-8231-a605fca41c0d-registration-dir\") pod \"csi-node-driver-nqtxw\" (UID: \"f3fea10d-7895-4144-8231-a605fca41c0d\") " pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:06.358497 kubelet[2651]: E1009 01:07:06.358439 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.358497 kubelet[2651]: W1009 01:07:06.358491 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.358587 kubelet[2651]: E1009 01:07:06.358529 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.358929 kubelet[2651]: E1009 01:07:06.358899 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.358929 kubelet[2651]: W1009 01:07:06.358920 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.358929 kubelet[2651]: E1009 01:07:06.358953 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.359392 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.360776 kubelet[2651]: W1009 01:07:06.359409 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.359477 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.359722 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.360776 kubelet[2651]: W1009 01:07:06.359734 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.359829 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.360580 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.360776 kubelet[2651]: W1009 01:07:06.360597 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.360776 kubelet[2651]: E1009 01:07:06.360686 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.361487 kubelet[2651]: E1009 01:07:06.361459 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.361487 kubelet[2651]: W1009 01:07:06.361478 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.361890 kubelet[2651]: E1009 01:07:06.361857 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.363227 kubelet[2651]: E1009 01:07:06.363201 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.363476 kubelet[2651]: W1009 01:07:06.363288 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.363572 kubelet[2651]: E1009 01:07:06.363554 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.365546 kubelet[2651]: E1009 01:07:06.365488 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.365606 kubelet[2651]: W1009 01:07:06.365555 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.366491 kubelet[2651]: E1009 01:07:06.365755 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.366491 kubelet[2651]: E1009 01:07:06.365984 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.366491 kubelet[2651]: W1009 01:07:06.365995 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.366491 kubelet[2651]: E1009 01:07:06.366239 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.366775 kubelet[2651]: E1009 01:07:06.366578 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.366775 kubelet[2651]: W1009 01:07:06.366590 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.366775 kubelet[2651]: E1009 01:07:06.366617 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.367153 kubelet[2651]: E1009 01:07:06.367096 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.367153 kubelet[2651]: W1009 01:07:06.367113 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.367296 kubelet[2651]: E1009 01:07:06.367257 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.367420 kubelet[2651]: E1009 01:07:06.367406 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.367506 kubelet[2651]: W1009 01:07:06.367422 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.367694 kubelet[2651]: E1009 01:07:06.367617 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.367920 kubelet[2651]: E1009 01:07:06.367763 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.367920 kubelet[2651]: W1009 01:07:06.367915 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.368165 kubelet[2651]: E1009 01:07:06.368044 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.368214 kubelet[2651]: E1009 01:07:06.368181 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.368214 kubelet[2651]: W1009 01:07:06.368192 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.368285 kubelet[2651]: E1009 01:07:06.368221 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.369021 kubelet[2651]: E1009 01:07:06.368406 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.369021 kubelet[2651]: W1009 01:07:06.368456 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.369021 kubelet[2651]: E1009 01:07:06.368575 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.369021 kubelet[2651]: E1009 01:07:06.368704 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.369021 kubelet[2651]: W1009 01:07:06.368714 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.369021 kubelet[2651]: E1009 01:07:06.368815 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.370836 kubelet[2651]: E1009 01:07:06.370817 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.370986 kubelet[2651]: W1009 01:07:06.370903 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.370986 kubelet[2651]: E1009 01:07:06.370975 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.371493 kubelet[2651]: E1009 01:07:06.371369 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.371493 kubelet[2651]: W1009 01:07:06.371384 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.371573 kubelet[2651]: E1009 01:07:06.371503 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.372235 kubelet[2651]: E1009 01:07:06.372208 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.372288 kubelet[2651]: W1009 01:07:06.372253 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.372322 kubelet[2651]: E1009 01:07:06.372295 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.372567 kubelet[2651]: E1009 01:07:06.372543 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.372567 kubelet[2651]: W1009 01:07:06.372561 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.372641 kubelet[2651]: E1009 01:07:06.372609 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:07:06.372833 kubelet[2651]: E1009 01:07:06.372810 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:06.372833 kubelet[2651]: W1009 01:07:06.372827 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:06.373039 kubelet[2651]: E1009 01:07:06.372934 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.373127 kubelet[2651]: E1009 01:07:06.373105 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.373127 kubelet[2651]: W1009 01:07:06.373126 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.373216 kubelet[2651]: E1009 01:07:06.373180 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.373473 kubelet[2651]: E1009 01:07:06.373445 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.373525 kubelet[2651]: W1009 01:07:06.373484 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.373598 kubelet[2651]: E1009 01:07:06.373573 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.373882 kubelet[2651]: E1009 01:07:06.373863 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.373994 kubelet[2651]: W1009 01:07:06.373959 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.374212 kubelet[2651]: E1009 01:07:06.374048 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.374675 kubelet[2651]: E1009 01:07:06.374533 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.374675 kubelet[2651]: W1009 01:07:06.374550 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.374675 kubelet[2651]: E1009 01:07:06.374605 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.374891 kubelet[2651]: E1009 01:07:06.374868 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.374891 kubelet[2651]: W1009 01:07:06.374881 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.374983 kubelet[2651]: E1009 01:07:06.374954 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.375234 kubelet[2651]: E1009 01:07:06.375206 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.375234 kubelet[2651]: W1009 01:07:06.375222 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.375307 kubelet[2651]: E1009 01:07:06.375290 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.375490 kubelet[2651]: E1009 01:07:06.375471 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.375490 kubelet[2651]: W1009 01:07:06.375487 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.375582 kubelet[2651]: E1009 01:07:06.375525 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.375789 kubelet[2651]: E1009 01:07:06.375770 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.375789 kubelet[2651]: W1009 01:07:06.375785 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.375931 kubelet[2651]: E1009 01:07:06.375889 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.377288 kubelet[2651]: E1009 01:07:06.377166 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.377288 kubelet[2651]: W1009 01:07:06.377184 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.377288 kubelet[2651]: E1009 01:07:06.377264 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.378141 kubelet[2651]: E1009 01:07:06.377497 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.378141 kubelet[2651]: W1009 01:07:06.377513 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.378141 kubelet[2651]: E1009 01:07:06.377603 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.378399 kubelet[2651]: E1009 01:07:06.378323 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.378399 kubelet[2651]: W1009 01:07:06.378332 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.378399 kubelet[2651]: E1009 01:07:06.378380 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.380446 kubelet[2651]: E1009 01:07:06.380422 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.380446 kubelet[2651]: W1009 01:07:06.380438 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.380535 kubelet[2651]: E1009 01:07:06.380453 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.381806 kubelet[2651]: E1009 01:07:06.381785 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.381806 kubelet[2651]: W1009 01:07:06.381801 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.381886 kubelet[2651]: E1009 01:07:06.381812 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.383328 kubelet[2651]: E1009 01:07:06.382754 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.383328 kubelet[2651]: W1009 01:07:06.382771 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.383328 kubelet[2651]: E1009 01:07:06.382786 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.390272 kubelet[2651]: E1009 01:07:06.390247 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.390463 kubelet[2651]: W1009 01:07:06.390370 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.390463 kubelet[2651]: E1009 01:07:06.390405 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.453708 kubelet[2651]: E1009 01:07:06.453661 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.453708 kubelet[2651]: W1009 01:07:06.453690 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.453708 kubelet[2651]: E1009 01:07:06.453714 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.453973 kubelet[2651]: E1009 01:07:06.453915 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.453973 kubelet[2651]: W1009 01:07:06.453923 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.453973 kubelet[2651]: E1009 01:07:06.453933 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.454186 kubelet[2651]: E1009 01:07:06.454160 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.454186 kubelet[2651]: W1009 01:07:06.454179 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.454259 kubelet[2651]: E1009 01:07:06.454196 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.454403 kubelet[2651]: E1009 01:07:06.454383 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.454403 kubelet[2651]: W1009 01:07:06.454397 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.454472 kubelet[2651]: E1009 01:07:06.454413 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.454622 kubelet[2651]: E1009 01:07:06.454601 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.454622 kubelet[2651]: W1009 01:07:06.454615 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.454694 kubelet[2651]: E1009 01:07:06.454632 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.455391 kubelet[2651]: E1009 01:07:06.455351 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.455391 kubelet[2651]: W1009 01:07:06.455379 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.455488 kubelet[2651]: E1009 01:07:06.455410 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.456087 kubelet[2651]: E1009 01:07:06.456040 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.456087 kubelet[2651]: W1009 01:07:06.456056 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.456233 kubelet[2651]: E1009 01:07:06.456156 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.456317 kubelet[2651]: E1009 01:07:06.456278 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.456317 kubelet[2651]: W1009 01:07:06.456286 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.456379 kubelet[2651]: E1009 01:07:06.456352 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.456928 kubelet[2651]: E1009 01:07:06.456888 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.456928 kubelet[2651]: W1009 01:07:06.456907 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.457032 kubelet[2651]: E1009 01:07:06.456972 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.457201 kubelet[2651]: E1009 01:07:06.457174 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.457201 kubelet[2651]: W1009 01:07:06.457190 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.457301 kubelet[2651]: E1009 01:07:06.457264 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458098 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461237 kubelet[2651]: W1009 01:07:06.458124 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458167 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458320 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461237 kubelet[2651]: W1009 01:07:06.458328 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458363 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458514 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461237 kubelet[2651]: W1009 01:07:06.458523 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458563 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.461237 kubelet[2651]: E1009 01:07:06.458709 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461552 kubelet[2651]: W1009 01:07:06.458716 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.458746 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.458896 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461552 kubelet[2651]: W1009 01:07:06.458904 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.458936 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.460369 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461552 kubelet[2651]: W1009 01:07:06.460379 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.460399 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461552 kubelet[2651]: E1009 01:07:06.460647 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461552 kubelet[2651]: W1009 01:07:06.460655 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.460689 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.460884 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461775 kubelet[2651]: W1009 01:07:06.460893 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.460988 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.461279 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461775 kubelet[2651]: W1009 01:07:06.461288 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.461378 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.461502 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461775 kubelet[2651]: W1009 01:07:06.461509 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461775 kubelet[2651]: E1009 01:07:06.461563 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.461983 kubelet[2651]: E1009 01:07:06.461818 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.461983 kubelet[2651]: W1009 01:07:06.461827 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.461983 kubelet[2651]: E1009 01:07:06.461928 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.463342 kubelet[2651]: E1009 01:07:06.463315 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.463342 kubelet[2651]: W1009 01:07:06.463333 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.463467 kubelet[2651]: E1009 01:07:06.463430 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.463588 kubelet[2651]: E1009 01:07:06.463556 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.463588 kubelet[2651]: W1009 01:07:06.463572 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.465145 kubelet[2651]: E1009 01:07:06.465108 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.465223 kubelet[2651]: E1009 01:07:06.465199 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.465223 kubelet[2651]: W1009 01:07:06.465215 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.465319 kubelet[2651]: E1009 01:07:06.465296 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.465538 kubelet[2651]: E1009 01:07:06.465514 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.465538 kubelet[2651]: W1009 01:07:06.465529 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.465538 kubelet[2651]: E1009 01:07:06.465539 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:06.474302 kubelet[2651]: E1009 01:07:06.474257 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:06.474828 containerd[1477]: time="2024-10-09T01:07:06.474778331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55f65bbf6d-plqdg,Uid:58cf420b-adb9-473c-baad-4d7527089c0f,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:06.475310 kubelet[2651]: E1009 01:07:06.475192 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:06.475310 kubelet[2651]: W1009 01:07:06.475205 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:06.475310 kubelet[2651]: E1009 01:07:06.475219 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:06.506519 kubelet[2651]: E1009 01:07:06.506453 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:06.508727 containerd[1477]: time="2024-10-09T01:07:06.508313150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:06.508727 containerd[1477]: time="2024-10-09T01:07:06.508397419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:06.508727 containerd[1477]: time="2024-10-09T01:07:06.508411816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:06.508727 containerd[1477]: time="2024-10-09T01:07:06.508508508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:06.508727 containerd[1477]: time="2024-10-09T01:07:06.508664331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jm5pm,Uid:924e4aa9-9416-41f7-9e42-8cb3b0645746,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:06.528303 systemd[1]: Started cri-containerd-a47193949b3283b52f2b5aca3bf4052399a95897fedf4c1a0a5e2a45a0fe54fa.scope - libcontainer container a47193949b3283b52f2b5aca3bf4052399a95897fedf4c1a0a5e2a45a0fe54fa. Oct 9 01:07:06.539965 containerd[1477]: time="2024-10-09T01:07:06.539630309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:06.539965 containerd[1477]: time="2024-10-09T01:07:06.539692957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:06.539965 containerd[1477]: time="2024-10-09T01:07:06.539708757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:06.539965 containerd[1477]: time="2024-10-09T01:07:06.539807654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:06.566526 systemd[1]: Started cri-containerd-3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3.scope - libcontainer container 3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3. 
Oct 9 01:07:06.579586 containerd[1477]: time="2024-10-09T01:07:06.579493296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55f65bbf6d-plqdg,Uid:58cf420b-adb9-473c-baad-4d7527089c0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a47193949b3283b52f2b5aca3bf4052399a95897fedf4c1a0a5e2a45a0fe54fa\"" Oct 9 01:07:06.580540 kubelet[2651]: E1009 01:07:06.580511 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:06.582136 containerd[1477]: time="2024-10-09T01:07:06.582093577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:07:06.596745 containerd[1477]: time="2024-10-09T01:07:06.596696321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jm5pm,Uid:924e4aa9-9416-41f7-9e42-8cb3b0645746,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\"" Oct 9 01:07:06.597349 kubelet[2651]: E1009 01:07:06.597326 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:08.344331 containerd[1477]: time="2024-10-09T01:07:08.344280604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:08.345161 containerd[1477]: time="2024-10-09T01:07:08.345130797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 01:07:08.346368 containerd[1477]: time="2024-10-09T01:07:08.346324796Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:08.348364 containerd[1477]: 
time="2024-10-09T01:07:08.348317779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:08.348890 containerd[1477]: time="2024-10-09T01:07:08.348863698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 1.766736969s" Oct 9 01:07:08.348922 containerd[1477]: time="2024-10-09T01:07:08.348889627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:07:08.349839 containerd[1477]: time="2024-10-09T01:07:08.349810031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:07:08.357186 containerd[1477]: time="2024-10-09T01:07:08.357125060Z" level=info msg="CreateContainer within sandbox \"a47193949b3283b52f2b5aca3bf4052399a95897fedf4c1a0a5e2a45a0fe54fa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:07:08.372442 containerd[1477]: time="2024-10-09T01:07:08.372405744Z" level=info msg="CreateContainer within sandbox \"a47193949b3283b52f2b5aca3bf4052399a95897fedf4c1a0a5e2a45a0fe54fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f2e1ddd3b00e653598a1a4d3a802f4f85525281f8428b95f06ee39a8ad09ce4a\"" Oct 9 01:07:08.372858 containerd[1477]: time="2024-10-09T01:07:08.372823311Z" level=info msg="StartContainer for \"f2e1ddd3b00e653598a1a4d3a802f4f85525281f8428b95f06ee39a8ad09ce4a\"" Oct 9 01:07:08.404747 systemd[1]: Started cri-containerd-f2e1ddd3b00e653598a1a4d3a802f4f85525281f8428b95f06ee39a8ad09ce4a.scope - 
libcontainer container f2e1ddd3b00e653598a1a4d3a802f4f85525281f8428b95f06ee39a8ad09ce4a. Oct 9 01:07:08.449883 containerd[1477]: time="2024-10-09T01:07:08.449829920Z" level=info msg="StartContainer for \"f2e1ddd3b00e653598a1a4d3a802f4f85525281f8428b95f06ee39a8ad09ce4a\" returns successfully" Oct 9 01:07:08.728879 kubelet[2651]: E1009 01:07:08.728828 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d" Oct 9 01:07:08.781856 kubelet[2651]: E1009 01:07:08.781823 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:08.789848 kubelet[2651]: I1009 01:07:08.789737 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55f65bbf6d-plqdg" podStartSLOduration=1.021750879 podStartE2EDuration="2.789719519s" podCreationTimestamp="2024-10-09 01:07:06 +0000 UTC" firstStartedPulling="2024-10-09 01:07:06.581734139 +0000 UTC m=+20.930887512" lastFinishedPulling="2024-10-09 01:07:08.349702779 +0000 UTC m=+22.698856152" observedRunningTime="2024-10-09 01:07:08.789168933 +0000 UTC m=+23.138322316" watchObservedRunningTime="2024-10-09 01:07:08.789719519 +0000 UTC m=+23.138872892" Oct 9 01:07:08.860507 kubelet[2651]: E1009 01:07:08.860469 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:08.860507 kubelet[2651]: W1009 01:07:08.860493 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:08.860507 
kubelet[2651]: E1009 01:07:08.860515 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:08.860736 kubelet[2651]: E1009 01:07:08.860717 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:08.860736 kubelet[2651]: W1009 01:07:08.860731 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:08.860797 kubelet[2651]: E1009 01:07:08.860740 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:08.860953 kubelet[2651]: E1009 01:07:08.860925 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:08.860953 kubelet[2651]: W1009 01:07:08.860948 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:08.861002 kubelet[2651]: E1009 01:07:08.860958 2651 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:09.730598 containerd[1477]: time="2024-10-09T01:07:09.730531950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:09.731878 containerd[1477]: time="2024-10-09T01:07:09.731716982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:07:09.733181 containerd[1477]: time="2024-10-09T01:07:09.733144330Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:09.735582 containerd[1477]: time="2024-10-09T01:07:09.735502901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:09.736547 containerd[1477]: time="2024-10-09T01:07:09.736512783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.386667436s" Oct 9 01:07:09.736607 containerd[1477]: time="2024-10-09T01:07:09.736552698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:07:09.739135 containerd[1477]: time="2024-10-09T01:07:09.738860225Z" level=info msg="CreateContainer within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:07:09.752654 containerd[1477]: time="2024-10-09T01:07:09.752616240Z" level=info msg="CreateContainer within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c\"" Oct 9 01:07:09.753120 containerd[1477]: time="2024-10-09T01:07:09.753101063Z" level=info msg="StartContainer for \"c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c\"" Oct 9 01:07:09.778657 systemd[1]: run-containerd-runc-k8s.io-c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c-runc.0mm1wY.mount: Deactivated successfully. Oct 9 01:07:09.783853 kubelet[2651]: I1009 01:07:09.783825 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:07:09.790581 kubelet[2651]: E1009 01:07:09.784458 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:09.790236 systemd[1]: Started cri-containerd-c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c.scope - libcontainer container c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c. Oct 9 01:07:09.820392 containerd[1477]: time="2024-10-09T01:07:09.820338675Z" level=info msg="StartContainer for \"c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c\" returns successfully" Oct 9 01:07:09.832706 systemd[1]: cri-containerd-c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c.scope: Deactivated successfully. 
Oct 9 01:07:10.200976 containerd[1477]: time="2024-10-09T01:07:10.200901483Z" level=info msg="shim disconnected" id=c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c namespace=k8s.io Oct 9 01:07:10.200976 containerd[1477]: time="2024-10-09T01:07:10.200961676Z" level=warning msg="cleaning up after shim disconnected" id=c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c namespace=k8s.io Oct 9 01:07:10.200976 containerd[1477]: time="2024-10-09T01:07:10.200972828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:10.728668 kubelet[2651]: E1009 01:07:10.728597 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d" Oct 9 01:07:10.749019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c83e0ffd280b77d20f02fde9df19a17ff4ca62d6e3da8be13ef738670aa98e0c-rootfs.mount: Deactivated successfully. 
Oct 9 01:07:10.787424 kubelet[2651]: E1009 01:07:10.787387 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:10.787938 containerd[1477]: time="2024-10-09T01:07:10.787896286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 9 01:07:12.729297 kubelet[2651]: E1009 01:07:12.729232 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d"
Oct 9 01:07:14.037137 kubelet[2651]: I1009 01:07:14.036939 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:07:14.037657 kubelet[2651]: E1009 01:07:14.037608 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:14.936751 kubelet[2651]: E1009 01:07:14.936690 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d"
Oct 9 01:07:14.944632 kubelet[2651]: E1009 01:07:14.944593 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:14.951936 containerd[1477]: time="2024-10-09T01:07:14.951884020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:14.952634 containerd[1477]: time="2024-10-09T01:07:14.952585910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 9 01:07:14.953861 containerd[1477]: time="2024-10-09T01:07:14.953810433Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:14.957179 containerd[1477]: time="2024-10-09T01:07:14.957132522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:14.957873 containerd[1477]: time="2024-10-09T01:07:14.957840714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.169906026s"
Oct 9 01:07:14.957948 containerd[1477]: time="2024-10-09T01:07:14.957873546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 9 01:07:14.959988 containerd[1477]: time="2024-10-09T01:07:14.959934333Z" level=info msg="CreateContainer within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 9 01:07:14.975311 containerd[1477]: time="2024-10-09T01:07:14.975265641Z" level=info msg="CreateContainer within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768\""
Oct 9 01:07:14.975823 containerd[1477]: time="2024-10-09T01:07:14.975766774Z" level=info msg="StartContainer for \"090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768\""
Oct 9 01:07:15.010211 systemd[1]: Started cri-containerd-090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768.scope - libcontainer container 090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768.
Oct 9 01:07:15.118816 containerd[1477]: time="2024-10-09T01:07:15.118758595Z" level=info msg="StartContainer for \"090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768\" returns successfully"
Oct 9 01:07:15.947728 kubelet[2651]: E1009 01:07:15.947691 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:16.482524 containerd[1477]: time="2024-10-09T01:07:16.482472189Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:07:16.485584 systemd[1]: cri-containerd-090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768.scope: Deactivated successfully.
Oct 9 01:07:16.507983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768-rootfs.mount: Deactivated successfully.
Oct 9 01:07:16.578554 kubelet[2651]: I1009 01:07:16.578517 2651 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 9 01:07:16.690753 containerd[1477]: time="2024-10-09T01:07:16.690684571Z" level=info msg="shim disconnected" id=090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768 namespace=k8s.io
Oct 9 01:07:16.690753 containerd[1477]: time="2024-10-09T01:07:16.690738693Z" level=warning msg="cleaning up after shim disconnected" id=090f852519a43d390d2c7a05409889d52b3a6c6d6f85c29b84e81a291a868768 namespace=k8s.io
Oct 9 01:07:16.690753 containerd[1477]: time="2024-10-09T01:07:16.690746628Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:07:16.734887 systemd[1]: Created slice kubepods-besteffort-podf3fea10d_7895_4144_8231_a605fca41c0d.slice - libcontainer container kubepods-besteffort-podf3fea10d_7895_4144_8231_a605fca41c0d.slice.
Oct 9 01:07:16.746853 containerd[1477]: time="2024-10-09T01:07:16.746810117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqtxw,Uid:f3fea10d-7895-4144-8231-a605fca41c0d,Namespace:calico-system,Attempt:0,}"
Oct 9 01:07:16.841214 kubelet[2651]: I1009 01:07:16.840361 2651 topology_manager.go:215] "Topology Admit Handler" podUID="bb07e656-ab01-414e-908e-42ef81b5409e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tlbzf"
Oct 9 01:07:16.841214 kubelet[2651]: I1009 01:07:16.840574 2651 topology_manager.go:215] "Topology Admit Handler" podUID="ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8wbhx"
Oct 9 01:07:16.841214 kubelet[2651]: I1009 01:07:16.840664 2651 topology_manager.go:215] "Topology Admit Handler" podUID="53d2281e-817c-4a93-8bb3-9fd28be2e647" podNamespace="calico-system" podName="calico-kube-controllers-597569b566-v8fgc"
Oct 9 01:07:16.869437 systemd[1]: Created slice kubepods-besteffort-pod53d2281e_817c_4a93_8bb3_9fd28be2e647.slice - libcontainer container kubepods-besteffort-pod53d2281e_817c_4a93_8bb3_9fd28be2e647.slice.
Oct 9 01:07:16.877479 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:43848.service - OpenSSH per-connection server daemon (10.0.0.1:43848).
Oct 9 01:07:16.890252 systemd[1]: Created slice kubepods-burstable-podac9b6f9b_e14c_4e2f_b736_e1f48d1c156b.slice - libcontainer container kubepods-burstable-podac9b6f9b_e14c_4e2f_b736_e1f48d1c156b.slice.
Oct 9 01:07:16.895104 systemd[1]: Created slice kubepods-burstable-podbb07e656_ab01_414e_908e_42ef81b5409e.slice - libcontainer container kubepods-burstable-podbb07e656_ab01_414e_908e_42ef81b5409e.slice.
Oct 9 01:07:16.918598 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 43848 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 01:07:16.920852 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:07:16.926202 systemd-logind[1451]: New session 8 of user core.
Oct 9 01:07:16.931531 containerd[1477]: time="2024-10-09T01:07:16.931488427Z" level=error msg="Failed to destroy network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:16.932570 containerd[1477]: time="2024-10-09T01:07:16.932518103Z" level=error msg="encountered an error cleaning up failed sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:16.932614 containerd[1477]: time="2024-10-09T01:07:16.932592473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqtxw,Uid:f3fea10d-7895-4144-8231-a605fca41c0d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:16.932901 kubelet[2651]: E1009 01:07:16.932849 2651 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:16.933128 kubelet[2651]: E1009 01:07:16.932923 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:16.933128 kubelet[2651]: E1009 01:07:16.932959 2651 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nqtxw"
Oct 9 01:07:16.933128 kubelet[2651]: E1009 01:07:16.933006 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nqtxw_calico-system(f3fea10d-7895-4144-8231-a605fca41c0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nqtxw_calico-system(f3fea10d-7895-4144-8231-a605fca41c0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d"
Oct 9 01:07:16.933274 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 01:07:16.936194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42-shm.mount: Deactivated successfully.
Oct 9 01:07:16.945106 kubelet[2651]: I1009 01:07:16.945035 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53d2281e-817c-4a93-8bb3-9fd28be2e647-tigera-ca-bundle\") pod \"calico-kube-controllers-597569b566-v8fgc\" (UID: \"53d2281e-817c-4a93-8bb3-9fd28be2e647\") " pod="calico-system/calico-kube-controllers-597569b566-v8fgc"
Oct 9 01:07:16.945176 kubelet[2651]: I1009 01:07:16.945118 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b-config-volume\") pod \"coredns-7db6d8ff4d-8wbhx\" (UID: \"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b\") " pod="kube-system/coredns-7db6d8ff4d-8wbhx"
Oct 9 01:07:16.945176 kubelet[2651]: I1009 01:07:16.945140 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gvf\" (UniqueName: \"kubernetes.io/projected/ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b-kube-api-access-x2gvf\") pod \"coredns-7db6d8ff4d-8wbhx\" (UID: \"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b\") " pod="kube-system/coredns-7db6d8ff4d-8wbhx"
Oct 9 01:07:16.945176 kubelet[2651]: I1009 01:07:16.945157 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57nm9\" (UniqueName: \"kubernetes.io/projected/bb07e656-ab01-414e-908e-42ef81b5409e-kube-api-access-57nm9\") pod \"coredns-7db6d8ff4d-tlbzf\" (UID: \"bb07e656-ab01-414e-908e-42ef81b5409e\") " pod="kube-system/coredns-7db6d8ff4d-tlbzf"
Oct 9 01:07:16.945176 kubelet[2651]: I1009 01:07:16.945173 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb07e656-ab01-414e-908e-42ef81b5409e-config-volume\") pod \"coredns-7db6d8ff4d-tlbzf\" (UID: \"bb07e656-ab01-414e-908e-42ef81b5409e\") " pod="kube-system/coredns-7db6d8ff4d-tlbzf"
Oct 9 01:07:16.945328 kubelet[2651]: I1009 01:07:16.945268 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvcz\" (UniqueName: \"kubernetes.io/projected/53d2281e-817c-4a93-8bb3-9fd28be2e647-kube-api-access-dwvcz\") pod \"calico-kube-controllers-597569b566-v8fgc\" (UID: \"53d2281e-817c-4a93-8bb3-9fd28be2e647\") " pod="calico-system/calico-kube-controllers-597569b566-v8fgc"
Oct 9 01:07:16.949898 kubelet[2651]: I1009 01:07:16.949872 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42"
Oct 9 01:07:16.951014 containerd[1477]: time="2024-10-09T01:07:16.950828753Z" level=info msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\""
Oct 9 01:07:16.951332 containerd[1477]: time="2024-10-09T01:07:16.951130240Z" level=info msg="Ensure that sandbox 5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42 in task-service has been cleanup successfully"
Oct 9 01:07:16.952428 kubelet[2651]: E1009 01:07:16.952397 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:16.953477 containerd[1477]: time="2024-10-09T01:07:16.953244056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 9 01:07:16.979542 containerd[1477]: time="2024-10-09T01:07:16.979479708Z" level=error msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" failed" error="failed to destroy network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:16.979838 kubelet[2651]: E1009 01:07:16.979774 2651 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42"
Oct 9 01:07:16.979989 kubelet[2651]: E1009 01:07:16.979853 2651 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42"}
Oct 9 01:07:16.979989 kubelet[2651]: E1009 01:07:16.979916 2651 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3fea10d-7895-4144-8231-a605fca41c0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 01:07:16.979989 kubelet[2651]: E1009 01:07:16.979954 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3fea10d-7895-4144-8231-a605fca41c0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nqtxw" podUID="f3fea10d-7895-4144-8231-a605fca41c0d"
Oct 9 01:07:17.058542 sshd[3439]: pam_unix(sshd:session): session closed for user core
Oct 9 01:07:17.066560 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:43848.service: Deactivated successfully.
Oct 9 01:07:17.069031 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 01:07:17.071129 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
Oct 9 01:07:17.072621 systemd-logind[1451]: Removed session 8.
Oct 9 01:07:17.187150 containerd[1477]: time="2024-10-09T01:07:17.187097033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597569b566-v8fgc,Uid:53d2281e-817c-4a93-8bb3-9fd28be2e647,Namespace:calico-system,Attempt:0,}"
Oct 9 01:07:17.195610 kubelet[2651]: E1009 01:07:17.195576 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:17.196032 containerd[1477]: time="2024-10-09T01:07:17.195984181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wbhx,Uid:ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b,Namespace:kube-system,Attempt:0,}"
Oct 9 01:07:17.197345 kubelet[2651]: E1009 01:07:17.197291 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:17.197900 containerd[1477]: time="2024-10-09T01:07:17.197855018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlbzf,Uid:bb07e656-ab01-414e-908e-42ef81b5409e,Namespace:kube-system,Attempt:0,}"
Oct 9 01:07:17.262186 containerd[1477]: time="2024-10-09T01:07:17.262129266Z" level=error msg="Failed to destroy network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.263462 containerd[1477]: time="2024-10-09T01:07:17.263435192Z" level=error msg="encountered an error cleaning up failed sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.263519 containerd[1477]: time="2024-10-09T01:07:17.263490806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597569b566-v8fgc,Uid:53d2281e-817c-4a93-8bb3-9fd28be2e647,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.263783 kubelet[2651]: E1009 01:07:17.263734 2651 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.263838 kubelet[2651]: E1009 01:07:17.263798 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597569b566-v8fgc"
Oct 9 01:07:17.263838 kubelet[2651]: E1009 01:07:17.263821 2651 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597569b566-v8fgc"
Oct 9 01:07:17.263887 kubelet[2651]: E1009 01:07:17.263864 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-597569b566-v8fgc_calico-system(53d2281e-817c-4a93-8bb3-9fd28be2e647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-597569b566-v8fgc_calico-system(53d2281e-817c-4a93-8bb3-9fd28be2e647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597569b566-v8fgc" podUID="53d2281e-817c-4a93-8bb3-9fd28be2e647"
Oct 9 01:07:17.267910 containerd[1477]: time="2024-10-09T01:07:17.267859389Z" level=error msg="Failed to destroy network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.268733 containerd[1477]: time="2024-10-09T01:07:17.268658542Z" level=error msg="encountered an error cleaning up failed sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.268733 containerd[1477]: time="2024-10-09T01:07:17.268707523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlbzf,Uid:bb07e656-ab01-414e-908e-42ef81b5409e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.268978 kubelet[2651]: E1009 01:07:17.268880 2651 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.268978 kubelet[2651]: E1009 01:07:17.268957 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tlbzf"
Oct 9 01:07:17.269076 kubelet[2651]: E1009 01:07:17.268979 2651 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tlbzf"
Oct 9 01:07:17.269076 kubelet[2651]: E1009 01:07:17.269021 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tlbzf_kube-system(bb07e656-ab01-414e-908e-42ef81b5409e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tlbzf_kube-system(bb07e656-ab01-414e-908e-42ef81b5409e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tlbzf" podUID="bb07e656-ab01-414e-908e-42ef81b5409e"
Oct 9 01:07:17.270298 containerd[1477]: time="2024-10-09T01:07:17.270247899Z" level=error msg="Failed to destroy network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.270669 containerd[1477]: time="2024-10-09T01:07:17.270631300Z" level=error msg="encountered an error cleaning up failed sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.270743 containerd[1477]: time="2024-10-09T01:07:17.270684230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wbhx,Uid:ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.271047 kubelet[2651]: E1009 01:07:17.270940 2651 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.271047 kubelet[2651]: E1009 01:07:17.270973 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8wbhx"
Oct 9 01:07:17.271047 kubelet[2651]: E1009 01:07:17.270988 2651 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8wbhx"
Oct 9 01:07:17.271177 kubelet[2651]: E1009 01:07:17.271015 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8wbhx_kube-system(ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8wbhx_kube-system(ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8wbhx" podUID="ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b"
Oct 9 01:07:17.954629 kubelet[2651]: I1009 01:07:17.954587 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e"
Oct 9 01:07:17.955299 containerd[1477]: time="2024-10-09T01:07:17.955227538Z" level=info msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\""
Oct 9 01:07:17.955617 containerd[1477]: time="2024-10-09T01:07:17.955421393Z" level=info msg="Ensure that sandbox 2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e in task-service has been cleanup successfully"
Oct 9 01:07:17.956217 kubelet[2651]: I1009 01:07:17.956193 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8"
Oct 9 01:07:17.956641 containerd[1477]: time="2024-10-09T01:07:17.956607262Z" level=info msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\""
Oct 9 01:07:17.956769 containerd[1477]: time="2024-10-09T01:07:17.956751093Z" level=info msg="Ensure that sandbox f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8 in task-service has been cleanup successfully"
Oct 9 01:07:17.958554 kubelet[2651]: I1009 01:07:17.958524 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45"
Oct 9 01:07:17.959666 containerd[1477]: time="2024-10-09T01:07:17.958901305Z" level=info msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\""
Oct 9 01:07:17.959666 containerd[1477]: time="2024-10-09T01:07:17.959101111Z" level=info msg="Ensure that sandbox 6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45 in task-service has been cleanup successfully"
Oct 9 01:07:17.987765 containerd[1477]: time="2024-10-09T01:07:17.987685861Z" level=error msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" failed" error="failed to destroy network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.988116 kubelet[2651]: E1009 01:07:17.988031 2651 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8"
Oct 9 01:07:17.988227 kubelet[2651]: E1009 01:07:17.988137 2651 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8"}
Oct 9 01:07:17.988227 kubelet[2651]: E1009 01:07:17.988199 2651 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb07e656-ab01-414e-908e-42ef81b5409e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 01:07:17.988334 kubelet[2651]: E1009 01:07:17.988232 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb07e656-ab01-414e-908e-42ef81b5409e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tlbzf" podUID="bb07e656-ab01-414e-908e-42ef81b5409e"
Oct 9 01:07:17.990206 containerd[1477]: time="2024-10-09T01:07:17.990154332Z" level=error msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" failed" error="failed to destroy network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:07:17.990890 kubelet[2651]: E1009 01:07:17.990656 2651 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45"
Oct 9 01:07:17.990890 kubelet[2651]: E1009 01:07:17.990713 2651 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45"}
Oct 9 01:07:17.990890 kubelet[2651]: E1009 01:07:17.990782 2651 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53d2281e-817c-4a93-8bb3-9fd28be2e647\" with KillPodSandboxError: \"rpc error: code = Unknown desc
= failed to destroy network for sandbox \\\"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:17.990890 kubelet[2651]: E1009 01:07:17.990812 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53d2281e-817c-4a93-8bb3-9fd28be2e647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597569b566-v8fgc" podUID="53d2281e-817c-4a93-8bb3-9fd28be2e647" Oct 9 01:07:17.991213 containerd[1477]: time="2024-10-09T01:07:17.990896597Z" level=error msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" failed" error="failed to destroy network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:17.991420 kubelet[2651]: E1009 01:07:17.991374 2651 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 
01:07:17.991420 kubelet[2651]: E1009 01:07:17.991416 2651 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e"} Oct 9 01:07:17.991500 kubelet[2651]: E1009 01:07:17.991444 2651 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:17.991500 kubelet[2651]: E1009 01:07:17.991469 2651 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8wbhx" podUID="ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b" Oct 9 01:07:20.629461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount749391496.mount: Deactivated successfully. 
Oct 9 01:07:21.110363 containerd[1477]: time="2024-10-09T01:07:21.110293344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:21.111038 containerd[1477]: time="2024-10-09T01:07:21.110980394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:07:21.112091 containerd[1477]: time="2024-10-09T01:07:21.112029616Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:21.113947 containerd[1477]: time="2024-10-09T01:07:21.113909057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:21.114506 containerd[1477]: time="2024-10-09T01:07:21.114464621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.16118546s" Oct 9 01:07:21.114506 containerd[1477]: time="2024-10-09T01:07:21.114493856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:07:21.126007 containerd[1477]: time="2024-10-09T01:07:21.125868058Z" level=info msg="CreateContainer within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:07:21.151859 containerd[1477]: time="2024-10-09T01:07:21.151807227Z" level=info msg="CreateContainer 
within sandbox \"3e7a5946230453c21bce998eac7c294bc0596762f9336e3f3f083d42f8f6d3e3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f96342adfd5578751c92ec7b29f99fdc43609ff4d3a9ecb14b27fdb60c5c923d\"" Oct 9 01:07:21.152414 containerd[1477]: time="2024-10-09T01:07:21.152365486Z" level=info msg="StartContainer for \"f96342adfd5578751c92ec7b29f99fdc43609ff4d3a9ecb14b27fdb60c5c923d\"" Oct 9 01:07:21.232214 systemd[1]: Started cri-containerd-f96342adfd5578751c92ec7b29f99fdc43609ff4d3a9ecb14b27fdb60c5c923d.scope - libcontainer container f96342adfd5578751c92ec7b29f99fdc43609ff4d3a9ecb14b27fdb60c5c923d. Oct 9 01:07:21.271321 containerd[1477]: time="2024-10-09T01:07:21.271266758Z" level=info msg="StartContainer for \"f96342adfd5578751c92ec7b29f99fdc43609ff4d3a9ecb14b27fdb60c5c923d\" returns successfully" Oct 9 01:07:21.337099 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:07:21.337293 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Oct 9 01:07:21.972322 kubelet[2651]: E1009 01:07:21.972287 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:21.984555 kubelet[2651]: I1009 01:07:21.984478 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jm5pm" podStartSLOduration=1.4671928890000001 podStartE2EDuration="15.984457996s" podCreationTimestamp="2024-10-09 01:07:06 +0000 UTC" firstStartedPulling="2024-10-09 01:07:06.597898097 +0000 UTC m=+20.947051470" lastFinishedPulling="2024-10-09 01:07:21.115163204 +0000 UTC m=+35.464316577" observedRunningTime="2024-10-09 01:07:21.983669354 +0000 UTC m=+36.332822727" watchObservedRunningTime="2024-10-09 01:07:21.984457996 +0000 UTC m=+36.333611369" Oct 9 01:07:22.067493 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:43852.service - OpenSSH per-connection server daemon (10.0.0.1:43852). Oct 9 01:07:22.107985 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 43852 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:22.109619 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:22.113692 systemd-logind[1451]: New session 9 of user core. Oct 9 01:07:22.128241 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:07:22.270639 sshd[3760]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:22.275267 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:43852.service: Deactivated successfully. Oct 9 01:07:22.277742 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:07:22.278392 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:07:22.279345 systemd-logind[1451]: Removed session 9. 
Oct 9 01:07:22.777107 kernel: bpftool[3900]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:07:22.973632 kubelet[2651]: I1009 01:07:22.973584 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:07:22.974457 kubelet[2651]: E1009 01:07:22.974294 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:23.024007 systemd-networkd[1404]: vxlan.calico: Link UP Oct 9 01:07:23.024019 systemd-networkd[1404]: vxlan.calico: Gained carrier Oct 9 01:07:24.486258 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Oct 9 01:07:27.286308 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:45990.service - OpenSSH per-connection server daemon (10.0.0.1:45990). Oct 9 01:07:27.325912 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 45990 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:27.327600 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:27.332126 systemd-logind[1451]: New session 10 of user core. Oct 9 01:07:27.340182 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:07:27.475139 sshd[3978]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:27.486095 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:45990.service: Deactivated successfully. Oct 9 01:07:27.488246 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:07:27.490128 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:07:27.501490 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:46004.service - OpenSSH per-connection server daemon (10.0.0.1:46004). Oct 9 01:07:27.502474 systemd-logind[1451]: Removed session 10. 
Oct 9 01:07:27.533578 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 46004 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:27.535261 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:27.539269 systemd-logind[1451]: New session 11 of user core. Oct 9 01:07:27.546199 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:07:27.684690 sshd[3993]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:27.694198 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:46004.service: Deactivated successfully. Oct 9 01:07:27.696140 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:07:27.698496 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:07:27.707689 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010). Oct 9 01:07:27.711313 systemd-logind[1451]: Removed session 11. Oct 9 01:07:27.742571 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:27.744433 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:27.748549 systemd-logind[1451]: New session 12 of user core. Oct 9 01:07:27.759269 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:07:27.876222 sshd[4005]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:27.880965 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:46010.service: Deactivated successfully. Oct 9 01:07:27.883302 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:07:27.883974 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:07:27.884891 systemd-logind[1451]: Removed session 12. 
Oct 9 01:07:28.729383 containerd[1477]: time="2024-10-09T01:07:28.729214415Z" level=info msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\"" Oct 9 01:07:28.729383 containerd[1477]: time="2024-10-09T01:07:28.729261545Z" level=info msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\"" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.786 [INFO][4058] k8s.go 608: Cleaning up netns ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4058] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" iface="eth0" netns="/var/run/netns/cni-126a7e37-c014-44de-b90e-9b1724b1a2ee" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4058] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" iface="eth0" netns="/var/run/netns/cni-126a7e37-c014-44de-b90e-9b1724b1a2ee" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4058] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" iface="eth0" netns="/var/run/netns/cni-126a7e37-c014-44de-b90e-9b1724b1a2ee" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4058] k8s.go 615: Releasing IP address(es) ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4058] utils.go 188: Calico CNI releasing IP address ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.840 [INFO][4075] ipam_plugin.go 417: Releasing address using handleID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.840 [INFO][4075] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.840 [INFO][4075] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.847 [WARNING][4075] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.848 [INFO][4075] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.849 [INFO][4075] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:28.854090 containerd[1477]: 2024-10-09 01:07:28.851 [INFO][4058] k8s.go 621: Teardown processing complete. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:28.855007 containerd[1477]: time="2024-10-09T01:07:28.854954322Z" level=info msg="TearDown network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" successfully" Oct 9 01:07:28.855007 containerd[1477]: time="2024-10-09T01:07:28.854990690Z" level=info msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" returns successfully" Oct 9 01:07:28.855565 kubelet[2651]: E1009 01:07:28.855530 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:28.858453 containerd[1477]: time="2024-10-09T01:07:28.858403869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlbzf,Uid:bb07e656-ab01-414e-908e-42ef81b5409e,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:28.859650 systemd[1]: run-netns-cni\x2d126a7e37\x2dc014\x2d44de\x2db90e\x2d9b1724b1a2ee.mount: Deactivated successfully. 
Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.786 [INFO][4057] k8s.go 608: Cleaning up netns ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.786 [INFO][4057] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" iface="eth0" netns="/var/run/netns/cni-39af654d-78c2-0adf-cfcb-7b6c1c59ad75" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.786 [INFO][4057] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" iface="eth0" netns="/var/run/netns/cni-39af654d-78c2-0adf-cfcb-7b6c1c59ad75" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4057] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" iface="eth0" netns="/var/run/netns/cni-39af654d-78c2-0adf-cfcb-7b6c1c59ad75" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4057] k8s.go 615: Releasing IP address(es) ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.787 [INFO][4057] utils.go 188: Calico CNI releasing IP address ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.840 [INFO][4073] ipam_plugin.go 417: Releasing address using handleID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.840 [INFO][4073] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.849 [INFO][4073] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.855 [WARNING][4073] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.858 [INFO][4073] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.860 [INFO][4073] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:28.865989 containerd[1477]: 2024-10-09 01:07:28.863 [INFO][4057] k8s.go 621: Teardown processing complete. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:28.866637 containerd[1477]: time="2024-10-09T01:07:28.866193315Z" level=info msg="TearDown network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" successfully" Oct 9 01:07:28.866637 containerd[1477]: time="2024-10-09T01:07:28.866221799Z" level=info msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" returns successfully" Oct 9 01:07:28.866847 containerd[1477]: time="2024-10-09T01:07:28.866823499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqtxw,Uid:f3fea10d-7895-4144-8231-a605fca41c0d,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:28.869504 systemd[1]: run-netns-cni\x2d39af654d\x2d78c2\x2d0adf\x2dcfcb\x2d7b6c1c59ad75.mount: Deactivated successfully. 
Oct 9 01:07:29.148936 systemd-networkd[1404]: calib1347abfe77: Link UP Oct 9 01:07:29.153818 systemd-networkd[1404]: calib1347abfe77: Gained carrier Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.063 [INFO][4090] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nqtxw-eth0 csi-node-driver- calico-system f3fea10d-7895-4144-8231-a605fca41c0d 841 0 2024-10-09 01:07:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-nqtxw eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib1347abfe77 [] []}} ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.063 [INFO][4090] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.090 [INFO][4117] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" HandleID="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.099 [INFO][4117] ipam_plugin.go 270: Auto assigning IP ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" 
HandleID="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nqtxw", "timestamp":"2024-10-09 01:07:29.090292825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.099 [INFO][4117] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.099 [INFO][4117] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.099 [INFO][4117] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.100 [INFO][4117] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.105 [INFO][4117] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.109 [INFO][4117] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.110 [INFO][4117] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.112 [INFO][4117] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.112 [INFO][4117] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.113 [INFO][4117] ipam.go 1685: Creating new handle: k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.121 [INFO][4117] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4117] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4117] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" host="localhost" Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4117] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:29.167919 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4117] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" HandleID="k8s-pod-network.2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.143 [INFO][4090] k8s.go 386: Populated endpoint ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqtxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3fea10d-7895-4144-8231-a605fca41c0d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nqtxw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calib1347abfe77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.143 [INFO][4090] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.143 [INFO][4090] dataplane_linux.go 68: Setting the host side veth name to calib1347abfe77 ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.149 [INFO][4090] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.149 [INFO][4090] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqtxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3fea10d-7895-4144-8231-a605fca41c0d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada", Pod:"csi-node-driver-nqtxw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib1347abfe77", MAC:"fa:70:78:50:2e:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.168527 containerd[1477]: 2024-10-09 01:07:29.164 [INFO][4090] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada" Namespace="calico-system" Pod="csi-node-driver-nqtxw" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:29.235221 systemd-networkd[1404]: cali7272d19aad5: Link UP Oct 9 01:07:29.236005 systemd-networkd[1404]: cali7272d19aad5: Gained carrier Oct 9 01:07:29.246248 containerd[1477]: time="2024-10-09T01:07:29.245650672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:29.246248 containerd[1477]: time="2024-10-09T01:07:29.245739849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:29.246248 containerd[1477]: time="2024-10-09T01:07:29.245758804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:29.246248 containerd[1477]: time="2024-10-09T01:07:29.245942380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:29.290297 systemd[1]: Started cri-containerd-2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada.scope - libcontainer container 2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada. Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.079 [INFO][4103] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0 coredns-7db6d8ff4d- kube-system bb07e656-ab01-414e-908e-42ef81b5409e 840 0 2024-10-09 01:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tlbzf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7272d19aad5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.079 [INFO][4103] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.111 [INFO][4124] ipam_plugin.go 230: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" HandleID="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.123 [INFO][4124] ipam_plugin.go 270: Auto assigning IP ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" HandleID="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tlbzf", "timestamp":"2024-10-09 01:07:29.111836434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.123 [INFO][4124] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4124] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.140 [INFO][4124] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.142 [INFO][4124] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.148 [INFO][4124] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.154 [INFO][4124] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.156 [INFO][4124] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.162 [INFO][4124] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.162 [INFO][4124] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.164 [INFO][4124] ipam.go 1685: Creating new handle: k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60 Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.203 [INFO][4124] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.227 [INFO][4124] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" host="localhost" Oct 9 
01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.227 [INFO][4124] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" host="localhost" Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.227 [INFO][4124] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.296297 containerd[1477]: 2024-10-09 01:07:29.227 [INFO][4124] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" HandleID="k8s-pod-network.94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.232 [INFO][4103] k8s.go 386: Populated endpoint ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb07e656-ab01-414e-908e-42ef81b5409e", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-tlbzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7272d19aad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.232 [INFO][4103] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.232 [INFO][4103] dataplane_linux.go 68: Setting the host side veth name to cali7272d19aad5 ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.236 [INFO][4103] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.237 [INFO][4103] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb07e656-ab01-414e-908e-42ef81b5409e", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60", Pod:"coredns-7db6d8ff4d-tlbzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7272d19aad5", MAC:"62:6d:06:72:0a:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.297769 containerd[1477]: 2024-10-09 01:07:29.287 [INFO][4103] k8s.go 500: Wrote updated endpoint to datastore ContainerID="94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlbzf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:29.319134 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:29.332010 containerd[1477]: time="2024-10-09T01:07:29.331710991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:29.332010 containerd[1477]: time="2024-10-09T01:07:29.331792394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:29.332010 containerd[1477]: time="2024-10-09T01:07:29.331833531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:29.332738 containerd[1477]: time="2024-10-09T01:07:29.332006977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:29.338193 containerd[1477]: time="2024-10-09T01:07:29.338120736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqtxw,Uid:f3fea10d-7895-4144-8231-a605fca41c0d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada\"" Oct 9 01:07:29.340685 containerd[1477]: time="2024-10-09T01:07:29.340436914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:07:29.360218 systemd[1]: Started cri-containerd-94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60.scope - libcontainer container 94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60. Oct 9 01:07:29.376030 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:29.408359 containerd[1477]: time="2024-10-09T01:07:29.408232151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlbzf,Uid:bb07e656-ab01-414e-908e-42ef81b5409e,Namespace:kube-system,Attempt:1,} returns sandbox id \"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60\"" Oct 9 01:07:29.409587 kubelet[2651]: E1009 01:07:29.409049 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:29.411759 containerd[1477]: time="2024-10-09T01:07:29.411533049Z" level=info msg="CreateContainer within sandbox \"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:07:29.429983 containerd[1477]: time="2024-10-09T01:07:29.429917466Z" level=info msg="CreateContainer within sandbox \"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"6f27bd511414c081a48d397010944be7bb368807387561bb097b7a96db80ab68\"" Oct 9 01:07:29.430478 containerd[1477]: time="2024-10-09T01:07:29.430445517Z" level=info msg="StartContainer for \"6f27bd511414c081a48d397010944be7bb368807387561bb097b7a96db80ab68\"" Oct 9 01:07:29.463250 systemd[1]: Started cri-containerd-6f27bd511414c081a48d397010944be7bb368807387561bb097b7a96db80ab68.scope - libcontainer container 6f27bd511414c081a48d397010944be7bb368807387561bb097b7a96db80ab68. Oct 9 01:07:29.493842 containerd[1477]: time="2024-10-09T01:07:29.493795067Z" level=info msg="StartContainer for \"6f27bd511414c081a48d397010944be7bb368807387561bb097b7a96db80ab68\" returns successfully" Oct 9 01:07:29.999102 kubelet[2651]: E1009 01:07:29.998213 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:30.007915 kubelet[2651]: I1009 01:07:30.007848 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tlbzf" podStartSLOduration=31.007826427 podStartE2EDuration="31.007826427s" podCreationTimestamp="2024-10-09 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:30.006423304 +0000 UTC m=+44.355576677" watchObservedRunningTime="2024-10-09 01:07:30.007826427 +0000 UTC m=+44.356979800" Oct 9 01:07:30.438232 systemd-networkd[1404]: calib1347abfe77: Gained IPv6LL Oct 9 01:07:30.907467 containerd[1477]: time="2024-10-09T01:07:30.907331591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:30.908196 containerd[1477]: time="2024-10-09T01:07:30.908138776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 01:07:30.909395 containerd[1477]: 
time="2024-10-09T01:07:30.909350391Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:30.911582 containerd[1477]: time="2024-10-09T01:07:30.911545992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:30.912186 containerd[1477]: time="2024-10-09T01:07:30.912155086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.571688986s" Oct 9 01:07:30.912186 containerd[1477]: time="2024-10-09T01:07:30.912183709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 01:07:30.914356 containerd[1477]: time="2024-10-09T01:07:30.914326974Z" level=info msg="CreateContainer within sandbox \"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:07:30.936843 containerd[1477]: time="2024-10-09T01:07:30.936792670Z" level=info msg="CreateContainer within sandbox \"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f39f6efb08d946143889fb9b705695f44872d6732f0676aa64b8827460f4d664\"" Oct 9 01:07:30.937328 containerd[1477]: time="2024-10-09T01:07:30.937303389Z" level=info msg="StartContainer for \"f39f6efb08d946143889fb9b705695f44872d6732f0676aa64b8827460f4d664\"" Oct 9 01:07:30.976218 systemd[1]: Started 
cri-containerd-f39f6efb08d946143889fb9b705695f44872d6732f0676aa64b8827460f4d664.scope - libcontainer container f39f6efb08d946143889fb9b705695f44872d6732f0676aa64b8827460f4d664. Oct 9 01:07:31.002488 kubelet[2651]: E1009 01:07:31.002442 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:31.011140 containerd[1477]: time="2024-10-09T01:07:31.009812244Z" level=info msg="StartContainer for \"f39f6efb08d946143889fb9b705695f44872d6732f0676aa64b8827460f4d664\" returns successfully" Oct 9 01:07:31.011306 containerd[1477]: time="2024-10-09T01:07:31.011281182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:07:31.142219 systemd-networkd[1404]: cali7272d19aad5: Gained IPv6LL Oct 9 01:07:31.729279 containerd[1477]: time="2024-10-09T01:07:31.729181673Z" level=info msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\"" Oct 9 01:07:31.730117 containerd[1477]: time="2024-10-09T01:07:31.729705987Z" level=info msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\"" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.880 [INFO][4362] k8s.go 608: Cleaning up netns ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.880 [INFO][4362] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" iface="eth0" netns="/var/run/netns/cni-84f4c157-e883-7e19-9b8b-d2526448178d" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.881 [INFO][4362] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" iface="eth0" netns="/var/run/netns/cni-84f4c157-e883-7e19-9b8b-d2526448178d" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.881 [INFO][4362] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" iface="eth0" netns="/var/run/netns/cni-84f4c157-e883-7e19-9b8b-d2526448178d" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.881 [INFO][4362] k8s.go 615: Releasing IP address(es) ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.881 [INFO][4362] utils.go 188: Calico CNI releasing IP address ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.907 [INFO][4378] ipam_plugin.go 417: Releasing address using handleID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.907 [INFO][4378] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.907 [INFO][4378] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.963 [WARNING][4378] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.963 [INFO][4378] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.965 [INFO][4378] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:31.969831 containerd[1477]: 2024-10-09 01:07:31.967 [INFO][4362] k8s.go 621: Teardown processing complete. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:31.970661 containerd[1477]: time="2024-10-09T01:07:31.970042260Z" level=info msg="TearDown network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" successfully" Oct 9 01:07:31.970661 containerd[1477]: time="2024-10-09T01:07:31.970085301Z" level=info msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" returns successfully" Oct 9 01:07:31.970718 kubelet[2651]: E1009 01:07:31.970490 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:31.973091 containerd[1477]: time="2024-10-09T01:07:31.971652883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wbhx,Uid:ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:31.972864 systemd[1]: run-netns-cni\x2d84f4c157\x2de883\x2d7e19\x2d9b8b\x2dd2526448178d.mount: Deactivated successfully. 
Oct 9 01:07:32.006226 kubelet[2651]: E1009 01:07:32.006120 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.963 [INFO][4361] k8s.go 608: Cleaning up netns ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.963 [INFO][4361] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" iface="eth0" netns="/var/run/netns/cni-b28e7ddc-cb92-2a9d-c3dd-92e582fa31ec" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.964 [INFO][4361] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" iface="eth0" netns="/var/run/netns/cni-b28e7ddc-cb92-2a9d-c3dd-92e582fa31ec" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.964 [INFO][4361] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" iface="eth0" netns="/var/run/netns/cni-b28e7ddc-cb92-2a9d-c3dd-92e582fa31ec" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.964 [INFO][4361] k8s.go 615: Releasing IP address(es) ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.964 [INFO][4361] utils.go 188: Calico CNI releasing IP address ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.987 [INFO][4385] ipam_plugin.go 417: Releasing address using handleID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.987 [INFO][4385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:31.987 [INFO][4385] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:32.062 [WARNING][4385] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:32.063 [INFO][4385] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:32.064 [INFO][4385] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:32.069550 containerd[1477]: 2024-10-09 01:07:32.066 [INFO][4361] k8s.go 621: Teardown processing complete. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:32.069976 containerd[1477]: time="2024-10-09T01:07:32.069784904Z" level=info msg="TearDown network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" successfully" Oct 9 01:07:32.069976 containerd[1477]: time="2024-10-09T01:07:32.069817014Z" level=info msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" returns successfully" Oct 9 01:07:32.070444 containerd[1477]: time="2024-10-09T01:07:32.070418954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597569b566-v8fgc,Uid:53d2281e-817c-4a93-8bb3-9fd28be2e647,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:32.072455 systemd[1]: run-netns-cni\x2db28e7ddc\x2dcb92\x2d2a9d\x2dc3dd\x2d92e582fa31ec.mount: Deactivated successfully. 
Oct 9 01:07:32.285744 systemd-networkd[1404]: cali26f241bcffd: Link UP Oct 9 01:07:32.287126 systemd-networkd[1404]: cali26f241bcffd: Gained carrier Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.219 [INFO][4393] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0 coredns-7db6d8ff4d- kube-system ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b 877 0 2024-10-09 01:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8wbhx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26f241bcffd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.220 [INFO][4393] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.248 [INFO][4420] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" HandleID="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.258 [INFO][4420] ipam_plugin.go 270: Auto assigning IP ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" 
HandleID="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006a7de0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8wbhx", "timestamp":"2024-10-09 01:07:32.248749384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.258 [INFO][4420] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.258 [INFO][4420] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.258 [INFO][4420] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.260 [INFO][4420] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.262 [INFO][4420] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.266 [INFO][4420] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.268 [INFO][4420] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.270 [INFO][4420] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.270 [INFO][4420] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.271 [INFO][4420] ipam.go 1685: Creating new handle: k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5 Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.274 [INFO][4420] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4420] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4420] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" host="localhost" Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4420] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:32.301475 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4420] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" HandleID="k8s-pod-network.b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.281 [INFO][4393] k8s.go 386: Populated endpoint ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8wbhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f241bcffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.282 [INFO][4393] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.282 [INFO][4393] dataplane_linux.go 68: Setting the host side veth name to cali26f241bcffd ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.287 [INFO][4393] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.288 [INFO][4393] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5", Pod:"coredns-7db6d8ff4d-8wbhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f241bcffd", MAC:"aa:3b:dc:50:ea:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:32.302887 containerd[1477]: 2024-10-09 01:07:32.297 [INFO][4393] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8wbhx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:32.323001 systemd-networkd[1404]: 
calia1cd38a3781: Link UP Oct 9 01:07:32.323904 systemd-networkd[1404]: calia1cd38a3781: Gained carrier Oct 9 01:07:32.335838 containerd[1477]: time="2024-10-09T01:07:32.335484167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:32.335838 containerd[1477]: time="2024-10-09T01:07:32.335561432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:32.335838 containerd[1477]: time="2024-10-09T01:07:32.335572723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:32.335838 containerd[1477]: time="2024-10-09T01:07:32.335660498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.225 [INFO][4403] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0 calico-kube-controllers-597569b566- calico-system 53d2281e-817c-4a93-8bb3-9fd28be2e647 878 0 2024-10-09 01:07:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:597569b566 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-597569b566-v8fgc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1cd38a3781 [] []}} ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-" Oct 9 01:07:32.337884 containerd[1477]: 
2024-10-09 01:07:32.225 [INFO][4403] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.253 [INFO][4425] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" HandleID="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.261 [INFO][4425] ipam_plugin.go 270: Auto assigning IP ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" HandleID="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000617ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-597569b566-v8fgc", "timestamp":"2024-10-09 01:07:32.25379743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.261 [INFO][4425] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4425] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.279 [INFO][4425] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.281 [INFO][4425] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.289 [INFO][4425] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.298 [INFO][4425] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.301 [INFO][4425] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.304 [INFO][4425] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.304 [INFO][4425] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.307 [INFO][4425] ipam.go 1685: Creating new handle: k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38 Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.311 [INFO][4425] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.317 [INFO][4425] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" host="localhost" Oct 9 
01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.317 [INFO][4425] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" host="localhost" Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.317 [INFO][4425] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:32.337884 containerd[1477]: 2024-10-09 01:07:32.318 [INFO][4425] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" HandleID="k8s-pod-network.8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.320 [INFO][4403] k8s.go 386: Populated endpoint ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0", GenerateName:"calico-kube-controllers-597569b566-", Namespace:"calico-system", SelfLink:"", UID:"53d2281e-817c-4a93-8bb3-9fd28be2e647", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597569b566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-597569b566-v8fgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1cd38a3781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.320 [INFO][4403] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.320 [INFO][4403] dataplane_linux.go 68: Setting the host side veth name to calia1cd38a3781 ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.323 [INFO][4403] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.324 [INFO][4403] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" 
Pod="calico-kube-controllers-597569b566-v8fgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0", GenerateName:"calico-kube-controllers-597569b566-", Namespace:"calico-system", SelfLink:"", UID:"53d2281e-817c-4a93-8bb3-9fd28be2e647", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597569b566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38", Pod:"calico-kube-controllers-597569b566-v8fgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1cd38a3781", MAC:"02:33:03:8a:84:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:32.338658 containerd[1477]: 2024-10-09 01:07:32.333 [INFO][4403] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38" Namespace="calico-system" Pod="calico-kube-controllers-597569b566-v8fgc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:32.354515 systemd[1]: Started cri-containerd-b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5.scope - libcontainer container b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5. Oct 9 01:07:32.368479 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:32.371025 containerd[1477]: time="2024-10-09T01:07:32.368557249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:32.371025 containerd[1477]: time="2024-10-09T01:07:32.368621039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:32.371025 containerd[1477]: time="2024-10-09T01:07:32.368631078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:32.371025 containerd[1477]: time="2024-10-09T01:07:32.368706058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:32.393991 systemd[1]: Started cri-containerd-8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38.scope - libcontainer container 8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38. 
Oct 9 01:07:32.399273 containerd[1477]: time="2024-10-09T01:07:32.399221731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wbhx,Uid:ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5\"" Oct 9 01:07:32.399864 kubelet[2651]: E1009 01:07:32.399844 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:32.402385 containerd[1477]: time="2024-10-09T01:07:32.402226702Z" level=info msg="CreateContainer within sandbox \"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:07:32.411983 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:32.434504 containerd[1477]: time="2024-10-09T01:07:32.434436354Z" level=info msg="CreateContainer within sandbox \"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae0b500cad5c5008a1f5bcaf696a345e4aa7dbe4ca7a3623ff3cfa6d23d2f7fe\"" Oct 9 01:07:32.437555 containerd[1477]: time="2024-10-09T01:07:32.437516375Z" level=info msg="StartContainer for \"ae0b500cad5c5008a1f5bcaf696a345e4aa7dbe4ca7a3623ff3cfa6d23d2f7fe\"" Oct 9 01:07:32.455008 containerd[1477]: time="2024-10-09T01:07:32.454942156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597569b566-v8fgc,Uid:53d2281e-817c-4a93-8bb3-9fd28be2e647,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38\"" Oct 9 01:07:32.470236 systemd[1]: Started cri-containerd-ae0b500cad5c5008a1f5bcaf696a345e4aa7dbe4ca7a3623ff3cfa6d23d2f7fe.scope - libcontainer container 
ae0b500cad5c5008a1f5bcaf696a345e4aa7dbe4ca7a3623ff3cfa6d23d2f7fe. Oct 9 01:07:32.501440 containerd[1477]: time="2024-10-09T01:07:32.501392129Z" level=info msg="StartContainer for \"ae0b500cad5c5008a1f5bcaf696a345e4aa7dbe4ca7a3623ff3cfa6d23d2f7fe\" returns successfully" Oct 9 01:07:32.841727 containerd[1477]: time="2024-10-09T01:07:32.841656070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:32.842348 containerd[1477]: time="2024-10-09T01:07:32.842306150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:07:32.843396 containerd[1477]: time="2024-10-09T01:07:32.843365198Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:32.859185 containerd[1477]: time="2024-10-09T01:07:32.859145961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:32.859810 containerd[1477]: time="2024-10-09T01:07:32.859767227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.848450939s" Oct 9 01:07:32.859839 containerd[1477]: time="2024-10-09T01:07:32.859809035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference 
\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:07:32.860754 containerd[1477]: time="2024-10-09T01:07:32.860654813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:07:32.861923 containerd[1477]: time="2024-10-09T01:07:32.861892116Z" level=info msg="CreateContainer within sandbox \"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:07:32.876359 containerd[1477]: time="2024-10-09T01:07:32.876315741Z" level=info msg="CreateContainer within sandbox \"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"91457b1970e1ec0a5b7dcdf911bba8a5553628867f2d20e59c86cb3dfa2ccbb9\"" Oct 9 01:07:32.876751 containerd[1477]: time="2024-10-09T01:07:32.876727995Z" level=info msg="StartContainer for \"91457b1970e1ec0a5b7dcdf911bba8a5553628867f2d20e59c86cb3dfa2ccbb9\"" Oct 9 01:07:32.894352 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:46012.service - OpenSSH per-connection server daemon (10.0.0.1:46012). Oct 9 01:07:32.900778 systemd[1]: Started cri-containerd-91457b1970e1ec0a5b7dcdf911bba8a5553628867f2d20e59c86cb3dfa2ccbb9.scope - libcontainer container 91457b1970e1ec0a5b7dcdf911bba8a5553628867f2d20e59c86cb3dfa2ccbb9. Oct 9 01:07:32.937369 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:32.940193 sshd[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:33.002555 containerd[1477]: time="2024-10-09T01:07:33.002496780Z" level=info msg="StartContainer for \"91457b1970e1ec0a5b7dcdf911bba8a5553628867f2d20e59c86cb3dfa2ccbb9\" returns successfully" Oct 9 01:07:33.004630 systemd-logind[1451]: New session 13 of user core. 
Oct 9 01:07:33.009271 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:07:33.014988 kubelet[2651]: E1009 01:07:33.014957 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:33.031407 kubelet[2651]: I1009 01:07:33.031053 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nqtxw" podStartSLOduration=23.510658701 podStartE2EDuration="27.031020159s" podCreationTimestamp="2024-10-09 01:07:06 +0000 UTC" firstStartedPulling="2024-10-09 01:07:29.340149876 +0000 UTC m=+43.689303249" lastFinishedPulling="2024-10-09 01:07:32.860511334 +0000 UTC m=+47.209664707" observedRunningTime="2024-10-09 01:07:33.022698769 +0000 UTC m=+47.371852142" watchObservedRunningTime="2024-10-09 01:07:33.031020159 +0000 UTC m=+47.380173532" Oct 9 01:07:33.142908 sshd[4608]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:33.148549 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:46012.service: Deactivated successfully. Oct 9 01:07:33.151075 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:07:33.151706 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:07:33.152562 systemd-logind[1451]: Removed session 13. 
Oct 9 01:07:33.766349 systemd-networkd[1404]: cali26f241bcffd: Gained IPv6LL Oct 9 01:07:33.795054 kubelet[2651]: I1009 01:07:33.795011 2651 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:07:33.795054 kubelet[2651]: I1009 01:07:33.795055 2651 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:07:34.017271 kubelet[2651]: E1009 01:07:34.017125 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:34.023294 systemd-networkd[1404]: calia1cd38a3781: Gained IPv6LL Oct 9 01:07:35.019055 kubelet[2651]: E1009 01:07:35.019017 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:35.040379 containerd[1477]: time="2024-10-09T01:07:35.040321343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:35.041096 containerd[1477]: time="2024-10-09T01:07:35.041028300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:07:35.042105 containerd[1477]: time="2024-10-09T01:07:35.042055938Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:35.044364 containerd[1477]: time="2024-10-09T01:07:35.044338253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:35.045081 containerd[1477]: time="2024-10-09T01:07:35.045036723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.184348588s" Oct 9 01:07:35.045127 containerd[1477]: time="2024-10-09T01:07:35.045086727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:07:35.053635 containerd[1477]: time="2024-10-09T01:07:35.053589816Z" level=info msg="CreateContainer within sandbox \"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:07:35.070183 containerd[1477]: time="2024-10-09T01:07:35.070132002Z" level=info msg="CreateContainer within sandbox \"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e\"" Oct 9 01:07:35.071164 containerd[1477]: time="2024-10-09T01:07:35.070922145Z" level=info msg="StartContainer for \"247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e\"" Oct 9 01:07:35.104278 systemd[1]: Started cri-containerd-247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e.scope - libcontainer container 247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e. 
Oct 9 01:07:35.304781 containerd[1477]: time="2024-10-09T01:07:35.304526870Z" level=info msg="StartContainer for \"247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e\" returns successfully" Oct 9 01:07:35.439008 kubelet[2651]: I1009 01:07:35.438921 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:07:35.439929 kubelet[2651]: E1009 01:07:35.439854 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:36.035135 kubelet[2651]: I1009 01:07:36.034803 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-597569b566-v8fgc" podStartSLOduration=27.44516211 podStartE2EDuration="30.03478417s" podCreationTimestamp="2024-10-09 01:07:06 +0000 UTC" firstStartedPulling="2024-10-09 01:07:32.45638251 +0000 UTC m=+46.805535883" lastFinishedPulling="2024-10-09 01:07:35.04600457 +0000 UTC m=+49.395157943" observedRunningTime="2024-10-09 01:07:36.034021318 +0000 UTC m=+50.383174691" watchObservedRunningTime="2024-10-09 01:07:36.03478417 +0000 UTC m=+50.383937543" Oct 9 01:07:36.035135 kubelet[2651]: I1009 01:07:36.035084 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8wbhx" podStartSLOduration=37.035078452 podStartE2EDuration="37.035078452s" podCreationTimestamp="2024-10-09 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:33.031317537 +0000 UTC m=+47.380470900" watchObservedRunningTime="2024-10-09 01:07:36.035078452 +0000 UTC m=+50.384231825" Oct 9 01:07:38.155904 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:50894.service - OpenSSH per-connection server daemon (10.0.0.1:50894). 
Oct 9 01:07:38.197770 sshd[4782]: Accepted publickey for core from 10.0.0.1 port 50894 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:38.199764 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:38.204655 systemd-logind[1451]: New session 14 of user core. Oct 9 01:07:38.211308 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:07:38.341335 sshd[4782]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:38.345834 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:50894.service: Deactivated successfully. Oct 9 01:07:38.347906 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:07:38.348615 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:07:38.349668 systemd-logind[1451]: Removed session 14. Oct 9 01:07:43.352310 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:50906.service - OpenSSH per-connection server daemon (10.0.0.1:50906). Oct 9 01:07:43.389723 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 50906 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:43.391327 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:43.395580 systemd-logind[1451]: New session 15 of user core. Oct 9 01:07:43.405192 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:07:43.516888 sshd[4804]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:43.521696 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:50906.service: Deactivated successfully. Oct 9 01:07:43.524043 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:07:43.524754 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:07:43.525883 systemd-logind[1451]: Removed session 15. 
Oct 9 01:07:45.718569 containerd[1477]: time="2024-10-09T01:07:45.718518778Z" level=info msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\"" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.752 [WARNING][4838] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0", GenerateName:"calico-kube-controllers-597569b566-", Namespace:"calico-system", SelfLink:"", UID:"53d2281e-817c-4a93-8bb3-9fd28be2e647", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597569b566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38", Pod:"calico-kube-controllers-597569b566-v8fgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1cd38a3781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 
01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.753 [INFO][4838] k8s.go 608: Cleaning up netns ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.753 [INFO][4838] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" iface="eth0" netns="" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.753 [INFO][4838] k8s.go 615: Releasing IP address(es) ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.753 [INFO][4838] utils.go 188: Calico CNI releasing IP address ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.772 [INFO][4847] ipam_plugin.go 417: Releasing address using handleID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.772 [INFO][4847] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.772 [INFO][4847] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.777 [WARNING][4847] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.777 [INFO][4847] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.780 [INFO][4847] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:45.785452 containerd[1477]: 2024-10-09 01:07:45.782 [INFO][4838] k8s.go 621: Teardown processing complete. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.785452 containerd[1477]: time="2024-10-09T01:07:45.785442114Z" level=info msg="TearDown network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" successfully" Oct 9 01:07:45.786192 containerd[1477]: time="2024-10-09T01:07:45.785463585Z" level=info msg="StopPodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" returns successfully" Oct 9 01:07:45.786192 containerd[1477]: time="2024-10-09T01:07:45.785859669Z" level=info msg="RemovePodSandbox for \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\"" Oct 9 01:07:45.795349 containerd[1477]: time="2024-10-09T01:07:45.795314376Z" level=info msg="Forcibly stopping sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\"" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.826 [WARNING][4870] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0", GenerateName:"calico-kube-controllers-597569b566-", Namespace:"calico-system", SelfLink:"", UID:"53d2281e-817c-4a93-8bb3-9fd28be2e647", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597569b566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b2e0fecd54d62f265c546b6da45a3b83c890e71b441dd876060ec8fda74db38", Pod:"calico-kube-controllers-597569b566-v8fgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1cd38a3781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.827 [INFO][4870] k8s.go 608: Cleaning up netns ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.827 [INFO][4870] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" iface="eth0" netns="" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.827 [INFO][4870] k8s.go 615: Releasing IP address(es) ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.827 [INFO][4870] utils.go 188: Calico CNI releasing IP address ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.849 [INFO][4878] ipam_plugin.go 417: Releasing address using handleID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.849 [INFO][4878] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.849 [INFO][4878] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.855 [WARNING][4878] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.855 [INFO][4878] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" HandleID="k8s-pod-network.6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Workload="localhost-k8s-calico--kube--controllers--597569b566--v8fgc-eth0" Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.856 [INFO][4878] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:45.861519 containerd[1477]: 2024-10-09 01:07:45.859 [INFO][4870] k8s.go 621: Teardown processing complete. ContainerID="6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45" Oct 9 01:07:45.862157 containerd[1477]: time="2024-10-09T01:07:45.861569970Z" level=info msg="TearDown network for sandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" successfully" Oct 9 01:07:45.886237 containerd[1477]: time="2024-10-09T01:07:45.886180954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:45.886401 containerd[1477]: time="2024-10-09T01:07:45.886262056Z" level=info msg="RemovePodSandbox \"6adbf06782aab5f05b8e5ab966e7493338532b4ff2db7d61c1e07734ec328b45\" returns successfully" Oct 9 01:07:45.886964 containerd[1477]: time="2024-10-09T01:07:45.886807929Z" level=info msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\"" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.923 [WARNING][4901] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5", Pod:"coredns-7db6d8ff4d-8wbhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f241bcffd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.923 [INFO][4901] k8s.go 608: Cleaning up netns ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.923 [INFO][4901] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" iface="eth0" netns="" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.923 [INFO][4901] k8s.go 615: Releasing IP address(es) ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.923 [INFO][4901] utils.go 188: Calico CNI releasing IP address ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.944 [INFO][4910] ipam_plugin.go 417: Releasing address using handleID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.944 [INFO][4910] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.944 [INFO][4910] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.949 [WARNING][4910] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.949 [INFO][4910] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.951 [INFO][4910] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:45.956442 containerd[1477]: 2024-10-09 01:07:45.953 [INFO][4901] k8s.go 621: Teardown processing complete. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:45.956879 containerd[1477]: time="2024-10-09T01:07:45.956485243Z" level=info msg="TearDown network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" successfully" Oct 9 01:07:45.956879 containerd[1477]: time="2024-10-09T01:07:45.956510881Z" level=info msg="StopPodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" returns successfully" Oct 9 01:07:45.957125 containerd[1477]: time="2024-10-09T01:07:45.957054821Z" level=info msg="RemovePodSandbox for \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\"" Oct 9 01:07:45.957167 containerd[1477]: time="2024-10-09T01:07:45.957135322Z" level=info msg="Forcibly stopping sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\"" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:45.990 [WARNING][4933] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ac9b6f9b-e14c-4e2f-b736-e1f48d1c156b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7fd8fc299380f542afdcacd8b2154eecc866dcb192c0e46da956dd153cd40b5", Pod:"coredns-7db6d8ff4d-8wbhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f241bcffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:45.991 [INFO][4933] k8s.go 608: 
Cleaning up netns ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:45.991 [INFO][4933] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" iface="eth0" netns="" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:45.991 [INFO][4933] k8s.go 615: Releasing IP address(es) ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:45.991 [INFO][4933] utils.go 188: Calico CNI releasing IP address ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.011 [INFO][4941] ipam_plugin.go 417: Releasing address using handleID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.012 [INFO][4941] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.012 [INFO][4941] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.016 [WARNING][4941] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.016 [INFO][4941] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" HandleID="k8s-pod-network.2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Workload="localhost-k8s-coredns--7db6d8ff4d--8wbhx-eth0" Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.018 [INFO][4941] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:46.022927 containerd[1477]: 2024-10-09 01:07:46.020 [INFO][4933] k8s.go 621: Teardown processing complete. ContainerID="2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e" Oct 9 01:07:46.022927 containerd[1477]: time="2024-10-09T01:07:46.022886128Z" level=info msg="TearDown network for sandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" successfully" Oct 9 01:07:46.037341 containerd[1477]: time="2024-10-09T01:07:46.037307854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:46.037404 containerd[1477]: time="2024-10-09T01:07:46.037360142Z" level=info msg="RemovePodSandbox \"2f049b8525b203febaea172f628c874351443de507c5f7e2b70d5f9c87b5b01e\" returns successfully" Oct 9 01:07:46.037926 containerd[1477]: time="2024-10-09T01:07:46.037892732Z" level=info msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\"" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.076 [WARNING][4963] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqtxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3fea10d-7895-4144-8231-a605fca41c0d", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada", Pod:"csi-node-driver-nqtxw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calib1347abfe77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.076 [INFO][4963] k8s.go 608: Cleaning up netns ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.076 [INFO][4963] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" iface="eth0" netns="" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.076 [INFO][4963] k8s.go 615: Releasing IP address(es) ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.076 [INFO][4963] utils.go 188: Calico CNI releasing IP address ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.095 [INFO][4971] ipam_plugin.go 417: Releasing address using handleID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.095 [INFO][4971] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.095 [INFO][4971] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.100 [WARNING][4971] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.100 [INFO][4971] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.101 [INFO][4971] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:46.106661 containerd[1477]: 2024-10-09 01:07:46.103 [INFO][4963] k8s.go 621: Teardown processing complete. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.107256 containerd[1477]: time="2024-10-09T01:07:46.106687656Z" level=info msg="TearDown network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" successfully" Oct 9 01:07:46.107256 containerd[1477]: time="2024-10-09T01:07:46.106721570Z" level=info msg="StopPodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" returns successfully" Oct 9 01:07:46.107256 containerd[1477]: time="2024-10-09T01:07:46.107242808Z" level=info msg="RemovePodSandbox for \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\"" Oct 9 01:07:46.107359 containerd[1477]: time="2024-10-09T01:07:46.107272193Z" level=info msg="Forcibly stopping sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\"" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.140 [WARNING][4993] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqtxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3fea10d-7895-4144-8231-a605fca41c0d", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2af7314fb1e7d2e9f9e0486e5b61cce92cfe2e27cbc4fb8e58282e05d48ada", Pod:"csi-node-driver-nqtxw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib1347abfe77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.140 [INFO][4993] k8s.go 608: Cleaning up netns ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.140 [INFO][4993] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" iface="eth0" netns="" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.140 [INFO][4993] k8s.go 615: Releasing IP address(es) ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.140 [INFO][4993] utils.go 188: Calico CNI releasing IP address ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.160 [INFO][5001] ipam_plugin.go 417: Releasing address using handleID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.161 [INFO][5001] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.161 [INFO][5001] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.167 [WARNING][5001] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.167 [INFO][5001] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" HandleID="k8s-pod-network.5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Workload="localhost-k8s-csi--node--driver--nqtxw-eth0" Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.168 [INFO][5001] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:46.172969 containerd[1477]: 2024-10-09 01:07:46.170 [INFO][4993] k8s.go 621: Teardown processing complete. ContainerID="5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42" Oct 9 01:07:46.173433 containerd[1477]: time="2024-10-09T01:07:46.173009090Z" level=info msg="TearDown network for sandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" successfully" Oct 9 01:07:46.176830 containerd[1477]: time="2024-10-09T01:07:46.176784046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:07:46.176888 containerd[1477]: time="2024-10-09T01:07:46.176831906Z" level=info msg="RemovePodSandbox \"5fcee1e90c3b3bdc038ca45b3c23b191b0b7bbaaa7053f854906be54eca7db42\" returns successfully" Oct 9 01:07:46.177325 containerd[1477]: time="2024-10-09T01:07:46.177277465Z" level=info msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\"" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.210 [WARNING][5024] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb07e656-ab01-414e-908e-42ef81b5409e", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60", Pod:"coredns-7db6d8ff4d-tlbzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7272d19aad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.210 [INFO][5024] k8s.go 608: Cleaning up netns 
ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.210 [INFO][5024] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" iface="eth0" netns="" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.211 [INFO][5024] k8s.go 615: Releasing IP address(es) ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.211 [INFO][5024] utils.go 188: Calico CNI releasing IP address ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.232 [INFO][5032] ipam_plugin.go 417: Releasing address using handleID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.232 [INFO][5032] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.232 [INFO][5032] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.237 [WARNING][5032] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.237 [INFO][5032] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.238 [INFO][5032] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:46.243042 containerd[1477]: 2024-10-09 01:07:46.240 [INFO][5024] k8s.go 621: Teardown processing complete. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.243524 containerd[1477]: time="2024-10-09T01:07:46.243111465Z" level=info msg="TearDown network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" successfully" Oct 9 01:07:46.243524 containerd[1477]: time="2024-10-09T01:07:46.243137674Z" level=info msg="StopPodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" returns successfully" Oct 9 01:07:46.243694 containerd[1477]: time="2024-10-09T01:07:46.243654657Z" level=info msg="RemovePodSandbox for \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\"" Oct 9 01:07:46.243728 containerd[1477]: time="2024-10-09T01:07:46.243698731Z" level=info msg="Forcibly stopping sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\"" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.277 [WARNING][5055] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb07e656-ab01-414e-908e-42ef81b5409e", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94d44525ef483d2a0d66e49e42c5df3bb4ac66d28f985be59b00fec680bdcb60", Pod:"coredns-7db6d8ff4d-tlbzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7272d19aad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.277 [INFO][5055] k8s.go 608: Cleaning up netns 
ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.277 [INFO][5055] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" iface="eth0" netns="" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.277 [INFO][5055] k8s.go 615: Releasing IP address(es) ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.277 [INFO][5055] utils.go 188: Calico CNI releasing IP address ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.295 [INFO][5063] ipam_plugin.go 417: Releasing address using handleID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.296 [INFO][5063] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.296 [INFO][5063] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.302 [WARNING][5063] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.302 [INFO][5063] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" HandleID="k8s-pod-network.f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Workload="localhost-k8s-coredns--7db6d8ff4d--tlbzf-eth0" Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.303 [INFO][5063] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:46.308800 containerd[1477]: 2024-10-09 01:07:46.306 [INFO][5055] k8s.go 621: Teardown processing complete. ContainerID="f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8" Oct 9 01:07:46.308800 containerd[1477]: time="2024-10-09T01:07:46.308760147Z" level=info msg="TearDown network for sandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" successfully" Oct 9 01:07:46.313202 containerd[1477]: time="2024-10-09T01:07:46.313152895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:07:46.313372 containerd[1477]: time="2024-10-09T01:07:46.313227796Z" level=info msg="RemovePodSandbox \"f47abd03da0c3694babfd7fc0b6c7d9c159d424fc508a18af2b66efe0ac10cc8\" returns successfully" Oct 9 01:07:47.209962 systemd[1]: run-containerd-runc-k8s.io-247d4c43eb7b5de92a346692a80b3f93d71be3aaf3655f92c0e9cb37c0f5450e-runc.Bie1Rf.mount: Deactivated successfully. Oct 9 01:07:48.535350 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:42072.service - OpenSSH per-connection server daemon (10.0.0.1:42072). 
Oct 9 01:07:48.575935 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 42072 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:48.577558 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:48.581346 systemd-logind[1451]: New session 16 of user core. Oct 9 01:07:48.591209 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:07:48.714323 sshd[5092]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:48.724213 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:42072.service: Deactivated successfully. Oct 9 01:07:48.726415 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:07:48.727951 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:07:48.737400 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:42086.service - OpenSSH per-connection server daemon (10.0.0.1:42086). Oct 9 01:07:48.738377 systemd-logind[1451]: Removed session 16. Oct 9 01:07:48.773589 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 42086 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:48.775405 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:48.779940 systemd-logind[1451]: New session 17 of user core. Oct 9 01:07:48.792191 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:07:49.046751 sshd[5108]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:49.057762 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:42086.service: Deactivated successfully. Oct 9 01:07:49.059684 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:07:49.061463 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:07:49.062876 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:42088.service - OpenSSH per-connection server daemon (10.0.0.1:42088). Oct 9 01:07:49.063687 systemd-logind[1451]: Removed session 17. 
Oct 9 01:07:49.112997 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:49.114512 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:49.119372 systemd-logind[1451]: New session 18 of user core. Oct 9 01:07:49.130331 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:07:50.376340 sshd[5120]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:50.390187 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:42088.service: Deactivated successfully. Oct 9 01:07:50.394785 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:07:50.396152 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:07:50.403365 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:42098.service - OpenSSH per-connection server daemon (10.0.0.1:42098). Oct 9 01:07:50.404137 systemd-logind[1451]: Removed session 18. Oct 9 01:07:50.445003 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 42098 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:50.446788 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:50.451678 systemd-logind[1451]: New session 19 of user core. Oct 9 01:07:50.463257 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:07:50.695093 sshd[5140]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:50.706036 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:42098.service: Deactivated successfully. Oct 9 01:07:50.708452 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:07:50.710408 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:07:50.721376 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:42110.service - OpenSSH per-connection server daemon (10.0.0.1:42110). Oct 9 01:07:50.722685 systemd-logind[1451]: Removed session 19. 
Oct 9 01:07:50.756207 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 42110 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:50.758139 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:50.762798 systemd-logind[1451]: New session 20 of user core. Oct 9 01:07:50.771269 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:07:50.891994 sshd[5152]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:50.896316 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:42110.service: Deactivated successfully. Oct 9 01:07:50.898764 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:07:50.899530 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:07:50.900905 systemd-logind[1451]: Removed session 20. Oct 9 01:07:55.904470 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:42528.service - OpenSSH per-connection server daemon (10.0.0.1:42528). Oct 9 01:07:55.945849 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 42528 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:07:55.948018 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:55.952509 systemd-logind[1451]: New session 21 of user core. Oct 9 01:07:55.962215 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:07:56.079956 sshd[5174]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:56.084427 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:42528.service: Deactivated successfully. Oct 9 01:07:56.086682 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:07:56.087435 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:07:56.088315 systemd-logind[1451]: Removed session 21. Oct 9 01:08:01.093383 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:42538.service - OpenSSH per-connection server daemon (10.0.0.1:42538). 
Oct 9 01:08:01.133455 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 42538 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:08:01.135309 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:01.139921 systemd-logind[1451]: New session 22 of user core. Oct 9 01:08:01.155213 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 01:08:01.270367 sshd[5195]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:01.274257 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:42538.service: Deactivated successfully. Oct 9 01:08:01.276247 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:08:01.276939 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:08:01.277993 systemd-logind[1451]: Removed session 22. Oct 9 01:08:01.730124 kubelet[2651]: E1009 01:08:01.730039 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:04.729694 kubelet[2651]: E1009 01:08:04.729646 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:05.515234 kubelet[2651]: E1009 01:08:05.515194 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:06.282562 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:50302.service - OpenSSH per-connection server daemon (10.0.0.1:50302). 
Oct 9 01:08:06.320239 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 50302 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:08:06.322036 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:06.326150 systemd-logind[1451]: New session 23 of user core. Oct 9 01:08:06.334196 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:08:06.449632 sshd[5245]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:06.453638 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:50302.service: Deactivated successfully. Oct 9 01:08:06.455721 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:08:06.456636 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:08:06.457929 systemd-logind[1451]: Removed session 23. Oct 9 01:08:09.729870 kubelet[2651]: E1009 01:08:09.729824 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:11.090723 kubelet[2651]: I1009 01:08:11.090634 2651 topology_manager.go:215] "Topology Admit Handler" podUID="f3dd13d7-f12d-4e34-9f12-7e80961b82ec" podNamespace="calico-apiserver" podName="calico-apiserver-64fcb885c7-56rv8" Oct 9 01:08:11.101252 systemd[1]: Created slice kubepods-besteffort-podf3dd13d7_f12d_4e34_9f12_7e80961b82ec.slice - libcontainer container kubepods-besteffort-podf3dd13d7_f12d_4e34_9f12_7e80961b82ec.slice. 
Oct 9 01:08:11.161642 kubelet[2651]: I1009 01:08:11.161581 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-calico-apiserver-certs\") pod \"calico-apiserver-64fcb885c7-56rv8\" (UID: \"f3dd13d7-f12d-4e34-9f12-7e80961b82ec\") " pod="calico-apiserver/calico-apiserver-64fcb885c7-56rv8" Oct 9 01:08:11.161642 kubelet[2651]: I1009 01:08:11.161625 2651 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljgc\" (UniqueName: \"kubernetes.io/projected/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-kube-api-access-7ljgc\") pod \"calico-apiserver-64fcb885c7-56rv8\" (UID: \"f3dd13d7-f12d-4e34-9f12-7e80961b82ec\") " pod="calico-apiserver/calico-apiserver-64fcb885c7-56rv8" Oct 9 01:08:11.263670 kubelet[2651]: E1009 01:08:11.263617 2651 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:08:11.263881 kubelet[2651]: E1009 01:08:11.263735 2651 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-calico-apiserver-certs podName:f3dd13d7-f12d-4e34-9f12-7e80961b82ec nodeName:}" failed. No retries permitted until 2024-10-09 01:08:11.763694444 +0000 UTC m=+86.112847817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-calico-apiserver-certs") pod "calico-apiserver-64fcb885c7-56rv8" (UID: "f3dd13d7-f12d-4e34-9f12-7e80961b82ec") : secret "calico-apiserver-certs" not found Oct 9 01:08:11.461567 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:50304.service - OpenSSH per-connection server daemon (10.0.0.1:50304). 
Oct 9 01:08:11.506090 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 50304 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 01:08:11.508007 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:11.512543 systemd-logind[1451]: New session 24 of user core. Oct 9 01:08:11.522231 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 01:08:11.643865 sshd[5264]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:11.650454 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:50304.service: Deactivated successfully. Oct 9 01:08:11.652750 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 01:08:11.653392 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Oct 9 01:08:11.654345 systemd-logind[1451]: Removed session 24. Oct 9 01:08:11.765131 kubelet[2651]: E1009 01:08:11.764968 2651 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:08:11.765131 kubelet[2651]: E1009 01:08:11.765043 2651 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-calico-apiserver-certs podName:f3dd13d7-f12d-4e34-9f12-7e80961b82ec nodeName:}" failed. No retries permitted until 2024-10-09 01:08:12.765025681 +0000 UTC m=+87.114179054 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f3dd13d7-f12d-4e34-9f12-7e80961b82ec-calico-apiserver-certs") pod "calico-apiserver-64fcb885c7-56rv8" (UID: "f3dd13d7-f12d-4e34-9f12-7e80961b82ec") : secret "calico-apiserver-certs" not found Oct 9 01:08:12.908904 containerd[1477]: time="2024-10-09T01:08:12.908831784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcb885c7-56rv8,Uid:f3dd13d7-f12d-4e34-9f12-7e80961b82ec,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:08:13.023395 systemd-networkd[1404]: cali2ed2c604953: Link UP Oct 9 01:08:13.023962 systemd-networkd[1404]: cali2ed2c604953: Gained carrier Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.955 [INFO][5301] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0 calico-apiserver-64fcb885c7- calico-apiserver f3dd13d7-f12d-4e34-9f12-7e80961b82ec 1160 0 2024-10-09 01:08:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fcb885c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64fcb885c7-56rv8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ed2c604953 [] []}} ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-" Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.955 [INFO][5301] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0" Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.988 [INFO][5312] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" HandleID="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Workload="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0" Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.998 [INFO][5312] ipam_plugin.go 270: Auto assigning IP ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" HandleID="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Workload="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003087e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64fcb885c7-56rv8", "timestamp":"2024-10-09 01:08:12.988712516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.998 [INFO][5312] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.998 [INFO][5312] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.998 [INFO][5312] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:12.999 [INFO][5312] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.002 [INFO][5312] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.005 [INFO][5312] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.006 [INFO][5312] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.008 [INFO][5312] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.008 [INFO][5312] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.009 [INFO][5312] ipam.go 1685: Creating new handle: k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.013 [INFO][5312] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.018 [INFO][5312] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.018 [INFO][5312] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" host="localhost"
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.018 [INFO][5312] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:08:13.036968 containerd[1477]: 2024-10-09 01:08:13.018 [INFO][5312] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" HandleID="k8s-pod-network.414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Workload="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0"
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.020 [INFO][5301] k8s.go 386: Populated endpoint ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0", GenerateName:"calico-apiserver-64fcb885c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3dd13d7-f12d-4e34-9f12-7e80961b82ec", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fcb885c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64fcb885c7-56rv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ed2c604953", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.021 [INFO][5301] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0"
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.021 [INFO][5301] dataplane_linux.go 68: Setting the host side veth name to cali2ed2c604953 ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0"
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.024 [INFO][5301] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0"
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.025 [INFO][5301] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0", GenerateName:"calico-apiserver-64fcb885c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3dd13d7-f12d-4e34-9f12-7e80961b82ec", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fcb885c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62", Pod:"calico-apiserver-64fcb885c7-56rv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ed2c604953", MAC:"76:c7:b5:57:74:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:08:13.037729 containerd[1477]: 2024-10-09 01:08:13.032 [INFO][5301] k8s.go 500: Wrote updated endpoint to datastore ContainerID="414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62" Namespace="calico-apiserver" Pod="calico-apiserver-64fcb885c7-56rv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--64fcb885c7--56rv8-eth0"
Oct 9 01:08:13.059747 containerd[1477]: time="2024-10-09T01:08:13.058905356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:08:13.059747 containerd[1477]: time="2024-10-09T01:08:13.059710938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:08:13.059747 containerd[1477]: time="2024-10-09T01:08:13.059726539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:08:13.060012 containerd[1477]: time="2024-10-09T01:08:13.059825266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:08:13.086579 systemd[1]: Started cri-containerd-414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62.scope - libcontainer container 414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62.
Oct 9 01:08:13.099735 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 01:08:13.122243 containerd[1477]: time="2024-10-09T01:08:13.122200512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcb885c7-56rv8,Uid:f3dd13d7-f12d-4e34-9f12-7e80961b82ec,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"414d01b7532a8cf38e9bfeb61b4bc1f71245d98e36a6115282008bbcb23faa62\""
Oct 9 01:08:13.124030 containerd[1477]: time="2024-10-09T01:08:13.124009345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""