Oct 9 07:13:29.880083 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:13:29.880118 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:13:29.880149 kernel: BIOS-provided physical RAM map:
Oct 9 07:13:29.880157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:13:29.880163 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:13:29.880169 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:13:29.880176 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 9 07:13:29.880183 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 9 07:13:29.880189 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 07:13:29.880198 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 07:13:29.880204 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:13:29.880210 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:13:29.880217 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 9 07:13:29.880223 kernel: NX (Execute Disable) protection: active
Oct 9 07:13:29.880230 kernel: APIC: Static calls initialized
Oct 9 07:13:29.880240 kernel: SMBIOS 2.8 present.
Oct 9 07:13:29.880250 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 9 07:13:29.880257 kernel: Hypervisor detected: KVM
Oct 9 07:13:29.880263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:13:29.880270 kernel: kvm-clock: using sched offset of 2604932180 cycles
Oct 9 07:13:29.880277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:13:29.880284 kernel: tsc: Detected 2794.750 MHz processor
Oct 9 07:13:29.880292 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:13:29.880299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:13:29.880309 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 9 07:13:29.880316 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:13:29.880323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:13:29.880330 kernel: Using GB pages for direct mapping
Oct 9 07:13:29.880337 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:13:29.880354 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 9 07:13:29.880361 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880369 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880376 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880385 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 9 07:13:29.880392 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880399 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880406 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880413 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:13:29.880420 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Oct 9 07:13:29.880427 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Oct 9 07:13:29.880438 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 9 07:13:29.880447 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Oct 9 07:13:29.880454 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Oct 9 07:13:29.880462 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Oct 9 07:13:29.880469 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Oct 9 07:13:29.880476 kernel: No NUMA configuration found
Oct 9 07:13:29.880483 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 9 07:13:29.880493 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 9 07:13:29.880500 kernel: Zone ranges:
Oct 9 07:13:29.880507 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:13:29.880514 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 9 07:13:29.880521 kernel: Normal empty
Oct 9 07:13:29.880528 kernel: Movable zone start for each node
Oct 9 07:13:29.880535 kernel: Early memory node ranges
Oct 9 07:13:29.880543 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:13:29.880550 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 9 07:13:29.880557 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 9 07:13:29.880568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:13:29.880578 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:13:29.880587 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 07:13:29.880596 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:13:29.880605 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:13:29.880615 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:13:29.880625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:13:29.880635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:13:29.880644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:13:29.880654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:13:29.880661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:13:29.880668 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:13:29.880676 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:13:29.880683 kernel: TSC deadline timer available
Oct 9 07:13:29.880690 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 9 07:13:29.880697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:13:29.880704 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 9 07:13:29.880711 kernel: kvm-guest: setup PV sched yield
Oct 9 07:13:29.880721 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 07:13:29.880728 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:13:29.880736 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:13:29.880743 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 9 07:13:29.880750 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 9 07:13:29.880757 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 9 07:13:29.880764 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 9 07:13:29.880771 kernel: kvm-guest: PV spinlocks enabled
Oct 9 07:13:29.880779 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 9 07:13:29.880789 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:13:29.880797 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:13:29.880804 kernel: random: crng init done
Oct 9 07:13:29.880811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 07:13:29.880819 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:13:29.880826 kernel: Fallback order for Node 0: 0
Oct 9 07:13:29.880833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 9 07:13:29.880840 kernel: Policy zone: DMA32
Oct 9 07:13:29.880850 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:13:29.880858 kernel: Memory: 2428444K/2571752K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 143048K reserved, 0K cma-reserved)
Oct 9 07:13:29.880874 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 07:13:29.880882 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:13:29.880889 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:13:29.880896 kernel: Dynamic Preempt: voluntary
Oct 9 07:13:29.880903 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:13:29.880911 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:13:29.880918 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 07:13:29.880928 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:13:29.880935 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:13:29.880943 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:13:29.880950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:13:29.880957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 07:13:29.880965 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 9 07:13:29.880972 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:13:29.880979 kernel: Console: colour VGA+ 80x25
Oct 9 07:13:29.880986 kernel: printk: console [ttyS0] enabled
Oct 9 07:13:29.880995 kernel: ACPI: Core revision 20230628
Oct 9 07:13:29.881003 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:13:29.881010 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:13:29.881017 kernel: x2apic enabled
Oct 9 07:13:29.881024 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:13:29.881032 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 9 07:13:29.881039 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 9 07:13:29.881047 kernel: kvm-guest: setup PV IPIs
Oct 9 07:13:29.881063 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:13:29.881071 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 07:13:29.881078 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 9 07:13:29.881086 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 07:13:29.881096 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 07:13:29.881103 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 07:13:29.881111 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:13:29.881118 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:13:29.881126 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:13:29.881136 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:13:29.881143 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 07:13:29.881151 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 07:13:29.881159 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:13:29.881166 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:13:29.881174 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 07:13:29.881182 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 07:13:29.881189 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 07:13:29.881199 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:13:29.881207 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:13:29.881214 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:13:29.881222 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:13:29.881229 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 07:13:29.881237 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:13:29.881244 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:13:29.881252 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:13:29.881259 kernel: SELinux: Initializing.
Oct 9 07:13:29.881269 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:13:29.881277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:13:29.881284 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 07:13:29.881292 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:13:29.881299 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:13:29.881307 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:13:29.881314 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 07:13:29.881322 kernel: ... version: 0
Oct 9 07:13:29.881329 kernel: ... bit width: 48
Oct 9 07:13:29.881349 kernel: ... generic registers: 6
Oct 9 07:13:29.881364 kernel: ... value mask: 0000ffffffffffff
Oct 9 07:13:29.881380 kernel: ... max period: 00007fffffffffff
Oct 9 07:13:29.881395 kernel: ... fixed-purpose events: 0
Oct 9 07:13:29.881402 kernel: ... event mask: 000000000000003f
Oct 9 07:13:29.881410 kernel: signal: max sigframe size: 1776
Oct 9 07:13:29.881417 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:13:29.881425 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:13:29.881432 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:13:29.881443 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:13:29.881451 kernel: .... node #0, CPUs: #1 #2 #3
Oct 9 07:13:29.881458 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 07:13:29.881469 kernel: smpboot: Max logical packages: 1
Oct 9 07:13:29.881477 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 9 07:13:29.881484 kernel: devtmpfs: initialized
Oct 9 07:13:29.881491 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:13:29.881499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:13:29.881507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 07:13:29.881517 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:13:29.881525 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:13:29.881532 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:13:29.881540 kernel: audit: type=2000 audit(1728458009.665:1): state=initialized audit_enabled=0 res=1
Oct 9 07:13:29.881547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:13:29.881555 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:13:29.881562 kernel: cpuidle: using governor menu
Oct 9 07:13:29.881570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:13:29.881577 kernel: dca service started, version 1.12.1
Oct 9 07:13:29.881587 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 07:13:29.881595 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 9 07:13:29.881602 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:13:29.881610 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:13:29.881618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 07:13:29.881625 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 07:13:29.881633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:13:29.881640 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:13:29.881648 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:13:29.881657 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:13:29.881665 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:13:29.881672 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:13:29.881680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:13:29.881687 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:13:29.881695 kernel: ACPI: Interpreter enabled
Oct 9 07:13:29.881702 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 07:13:29.881709 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:13:29.881717 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:13:29.881727 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:13:29.881734 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 07:13:29.881742 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:13:29.881926 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:13:29.882058 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 07:13:29.882182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 07:13:29.882192 kernel: PCI host bridge to bus 0000:00
Oct 9 07:13:29.882324 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:13:29.882493 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:13:29.882618 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:13:29.882730 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 9 07:13:29.882843 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 07:13:29.882967 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 07:13:29.883081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:13:29.883233 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 07:13:29.883384 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 9 07:13:29.883513 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 9 07:13:29.883641 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 9 07:13:29.883762 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 9 07:13:29.883895 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:13:29.884029 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 07:13:29.884185 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 9 07:13:29.884323 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 9 07:13:29.884538 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 9 07:13:29.884671 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:13:29.884793 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:13:29.884927 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 9 07:13:29.885049 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 9 07:13:29.885185 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:13:29.885307 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 9 07:13:29.885446 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 9 07:13:29.885570 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 9 07:13:29.885698 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 9 07:13:29.885827 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 07:13:29.885967 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 07:13:29.886097 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 07:13:29.886220 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 9 07:13:29.886354 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 9 07:13:29.886487 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 07:13:29.886609 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 07:13:29.886619 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:13:29.886632 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:13:29.886640 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:13:29.886647 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:13:29.886655 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 07:13:29.886662 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 07:13:29.886670 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 07:13:29.886677 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 07:13:29.886685 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 07:13:29.886692 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 07:13:29.886702 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 07:13:29.886710 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 07:13:29.886717 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 07:13:29.886725 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 07:13:29.886732 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 07:13:29.886740 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 07:13:29.886747 kernel: iommu: Default domain type: Translated
Oct 9 07:13:29.886755 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:13:29.886762 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:13:29.886772 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:13:29.886780 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:13:29.886787 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 9 07:13:29.886920 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 07:13:29.887044 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 07:13:29.887167 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:13:29.887176 kernel: vgaarb: loaded
Oct 9 07:13:29.887184 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:13:29.887195 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:13:29.887203 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:13:29.887210 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:13:29.887218 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:13:29.887226 kernel: pnp: PnP ACPI init
Oct 9 07:13:29.887402 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 07:13:29.887414 kernel: pnp: PnP ACPI: found 6 devices
Oct 9 07:13:29.887422 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:13:29.887434 kernel: NET: Registered PF_INET protocol family
Oct 9 07:13:29.887442 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 07:13:29.887449 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 07:13:29.887457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:13:29.887465 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:13:29.887473 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 07:13:29.887480 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 07:13:29.887488 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:13:29.887495 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:13:29.887505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:13:29.887513 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:13:29.887630 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:13:29.887743 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:13:29.887855 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:13:29.887978 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 9 07:13:29.888090 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 07:13:29.888202 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 07:13:29.888216 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:13:29.888223 kernel: Initialise system trusted keyrings
Oct 9 07:13:29.888231 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 07:13:29.888239 kernel: Key type asymmetric registered
Oct 9 07:13:29.888246 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:13:29.888254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:13:29.888261 kernel: io scheduler mq-deadline registered
Oct 9 07:13:29.888269 kernel: io scheduler kyber registered
Oct 9 07:13:29.888276 kernel: io scheduler bfq registered
Oct 9 07:13:29.888284 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:13:29.888295 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 07:13:29.888303 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 07:13:29.888311 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 9 07:13:29.888318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:13:29.888326 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:13:29.888333 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:13:29.888353 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:13:29.888361 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:13:29.888490 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 07:13:29.888505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:13:29.888622 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 07:13:29.888738 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:13:29 UTC (1728458009)
Oct 9 07:13:29.888854 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 9 07:13:29.888873 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 07:13:29.888880 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:13:29.888888 kernel: Segment Routing with IPv6
Oct 9 07:13:29.888895 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:13:29.888907 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:13:29.888914 kernel: Key type dns_resolver registered
Oct 9 07:13:29.888922 kernel: IPI shorthand broadcast: enabled
Oct 9 07:13:29.888929 kernel: sched_clock: Marking stable (718003901, 104563036)->(840941785, -18374848)
Oct 9 07:13:29.888937 kernel: registered taskstats version 1
Oct 9 07:13:29.888944 kernel: Loading compiled-in X.509 certificates
Oct 9 07:13:29.888952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:13:29.888959 kernel: Key type .fscrypt registered
Oct 9 07:13:29.888967 kernel: Key type fscrypt-provisioning registered
Oct 9 07:13:29.888977 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:13:29.888984 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:13:29.888992 kernel: ima: No architecture policies found
Oct 9 07:13:29.888999 kernel: clk: Disabling unused clocks
Oct 9 07:13:29.889007 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:13:29.889014 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:13:29.889022 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:13:29.889029 kernel: Run /init as init process
Oct 9 07:13:29.889039 kernel: with arguments:
Oct 9 07:13:29.889046 kernel: /init
Oct 9 07:13:29.889054 kernel: with environment:
Oct 9 07:13:29.889061 kernel: HOME=/
Oct 9 07:13:29.889069 kernel: TERM=linux
Oct 9 07:13:29.889076 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:13:29.889085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:13:29.889095 systemd[1]: Detected virtualization kvm.
Oct 9 07:13:29.889106 systemd[1]: Detected architecture x86-64.
Oct 9 07:13:29.889113 systemd[1]: Running in initrd.
Oct 9 07:13:29.889121 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:13:29.889129 systemd[1]: Hostname set to .
Oct 9 07:13:29.889137 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:13:29.889145 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:13:29.889153 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:13:29.889161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:13:29.889172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:13:29.889181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:13:29.889201 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:13:29.889212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:13:29.889222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:13:29.889232 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:13:29.889241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:13:29.889249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:13:29.889257 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:13:29.889265 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:13:29.889274 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:13:29.889282 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:13:29.889290 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:13:29.889300 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:13:29.889309 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:13:29.889317 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:13:29.889325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:13:29.889334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:13:29.889399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:13:29.889407 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:13:29.889416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:13:29.889424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:13:29.889435 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:13:29.889443 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:13:29.889452 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:13:29.889460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:13:29.889468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:13:29.889476 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:13:29.889485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:13:29.889493 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:13:29.889505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:13:29.889513 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:13:29.889540 systemd-journald[193]: Collecting audit messages is disabled.
Oct 9 07:13:29.889561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:13:29.889570 systemd-journald[193]: Journal started
Oct 9 07:13:29.889590 systemd-journald[193]: Runtime Journal (/run/log/journal/cc3180b570ce4a00b2603829ae02a776) is 6.0M, max 48.4M, 42.3M free.
Oct 9 07:13:29.878542 systemd-modules-load[194]: Inserted module 'overlay'
Oct 9 07:13:29.917413 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:13:29.919121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:13:29.924360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:13:29.927360 kernel: Bridge firewalling registered
Oct 9 07:13:29.927391 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 9 07:13:29.929532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:13:29.932579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:13:29.933958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:13:29.940937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:13:29.943528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:13:29.947596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:13:29.950244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:13:29.963974 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:13:29.973049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:13:29.974715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:13:29.983830 dracut-cmdline[224]: dracut-dracut-053
Oct 9 07:13:29.987122 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:13:30.010397 systemd-resolved[229]: Positive Trust Anchors:
Oct 9 07:13:30.010415 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:13:30.010457 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:13:30.013033 systemd-resolved[229]: Defaulting to hostname 'linux'.
Oct 9 07:13:30.014176 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:13:30.019737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:13:30.082377 kernel: SCSI subsystem initialized
Oct 9 07:13:30.092359 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:13:30.105368 kernel: iscsi: registered transport (tcp)
Oct 9 07:13:30.129432 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:13:30.129454 kernel: QLogic iSCSI HBA Driver
Oct 9 07:13:30.174179 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:13:30.181509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:13:30.208313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:13:30.208400 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:13:30.208427 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:13:30.253364 kernel: raid6: avx2x4 gen() 30650 MB/s
Oct 9 07:13:30.270359 kernel: raid6: avx2x2 gen() 31337 MB/s
Oct 9 07:13:30.287427 kernel: raid6: avx2x1 gen() 25732 MB/s
Oct 9 07:13:30.287443 kernel: raid6: using algorithm avx2x2 gen() 31337 MB/s
Oct 9 07:13:30.305436 kernel: raid6: .... xor() 19994 MB/s, rmw enabled
Oct 9 07:13:30.305457 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:13:30.330361 kernel: xor: automatically using best checksumming function avx
Oct 9 07:13:30.504369 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:13:30.518486 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:13:30.526595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:13:30.541797 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Oct 9 07:13:30.547552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:13:30.550392 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:13:30.567558 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Oct 9 07:13:30.602921 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:13:30.615451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:13:30.679532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:13:30.689566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:13:30.702893 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:13:30.705754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:13:30.709270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:13:30.712411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:13:30.719732 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 07:13:30.721041 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 07:13:30.721215 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:13:30.730512 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:13:30.736779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:13:30.742469 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:13:30.742493 kernel: GPT:9289727 != 19775487
Oct 9 07:13:30.742508 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:13:30.742519 kernel: GPT:9289727 != 19775487
Oct 9 07:13:30.742529 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:13:30.742539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:13:30.739118 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:13:30.742576 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:13:30.742786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:13:30.742916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:13:30.743153 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:13:30.750082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:13:30.755167 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:13:30.756653 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:13:30.756673 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:13:30.780029 kernel: libata version 3.00 loaded.
Oct 9 07:13:30.797366 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 07:13:30.797586 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 07:13:30.798375 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 07:13:30.798552 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 07:13:30.805356 kernel: scsi host0: ahci
Oct 9 07:13:30.805666 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Oct 9 07:13:30.803946 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:13:30.831396 kernel: scsi host1: ahci
Oct 9 07:13:30.831577 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (466)
Oct 9 07:13:30.831589 kernel: scsi host2: ahci
Oct 9 07:13:30.831762 kernel: scsi host3: ahci
Oct 9 07:13:30.831918 kernel: scsi host4: ahci
Oct 9 07:13:30.832064 kernel: scsi host5: ahci
Oct 9 07:13:30.832208 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 9 07:13:30.832219 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 9 07:13:30.832229 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 9 07:13:30.832239 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 9 07:13:30.832253 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 9 07:13:30.832262 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 9 07:13:30.834313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:13:30.845838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:13:30.853205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:13:30.859503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:13:30.862223 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:13:30.878615 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:13:30.882034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:13:30.888804 disk-uuid[567]: Primary Header is updated.
Oct 9 07:13:30.888804 disk-uuid[567]: Secondary Entries is updated.
Oct 9 07:13:30.888804 disk-uuid[567]: Secondary Header is updated.
Oct 9 07:13:30.892458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:13:30.910702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:13:31.125126 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 07:13:31.125212 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 07:13:31.125223 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 07:13:31.125234 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 07:13:31.126368 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 07:13:31.127410 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 07:13:31.127439 kernel: ata3.00: applying bridge limits
Oct 9 07:13:31.128373 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 07:13:31.129364 kernel: ata3.00: configured for UDMA/100
Oct 9 07:13:31.130373 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 07:13:31.174412 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 07:13:31.174780 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 07:13:31.188465 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 07:13:31.900374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:13:31.900785 disk-uuid[571]: The operation has completed successfully.
Oct 9 07:13:31.930109 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:13:31.930245 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:13:31.960595 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:13:31.969027 sh[596]: Success
Oct 9 07:13:31.988377 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 07:13:32.026452 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:13:32.040218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:13:32.043096 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:13:32.054950 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:13:32.054977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:13:32.054988 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:13:32.055959 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:13:32.056683 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:13:32.061940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:13:32.062189 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:13:32.070461 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:13:32.072062 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:13:32.080376 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:13:32.080433 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:13:32.080444 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:13:32.083373 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:13:32.093334 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:13:32.094738 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:13:32.103710 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:13:32.111624 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:13:32.218311 ignition[688]: Ignition 2.18.0
Oct 9 07:13:32.218720 ignition[688]: Stage: fetch-offline
Oct 9 07:13:32.218762 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:32.218773 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:32.218991 ignition[688]: parsed url from cmdline: ""
Oct 9 07:13:32.218995 ignition[688]: no config URL provided
Oct 9 07:13:32.219001 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:13:32.223036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:13:32.219014 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:13:32.219045 ignition[688]: op(1): [started] loading QEMU firmware config module
Oct 9 07:13:32.219052 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 07:13:32.229430 ignition[688]: op(1): [finished] loading QEMU firmware config module
Oct 9 07:13:32.232478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:13:32.256603 systemd-networkd[785]: lo: Link UP
Oct 9 07:13:32.256613 systemd-networkd[785]: lo: Gained carrier
Oct 9 07:13:32.258146 systemd-networkd[785]: Enumeration completed
Oct 9 07:13:32.258215 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:13:32.258535 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:13:32.258539 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:13:32.259367 systemd-networkd[785]: eth0: Link UP
Oct 9 07:13:32.259370 systemd-networkd[785]: eth0: Gained carrier
Oct 9 07:13:32.259377 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:13:32.260289 systemd[1]: Reached target network.target - Network.
Oct 9 07:13:32.277381 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:13:32.283285 ignition[688]: parsing config with SHA512: 8c4eca58dde8dc056b1ae6ff3ede294dddd6778c69a4d923bff2076da7606bbf34093d463191f16c2eb238b28b2bbf6e51885a38181a923d61f3804d3b08b091
Oct 9 07:13:32.287247 unknown[688]: fetched base config from "system"
Oct 9 07:13:32.287265 unknown[688]: fetched user config from "qemu"
Oct 9 07:13:32.288839 ignition[688]: fetch-offline: fetch-offline passed
Oct 9 07:13:32.288967 ignition[688]: Ignition finished successfully
Oct 9 07:13:32.291391 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:13:32.293902 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 07:13:32.302480 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:13:32.322519 ignition[789]: Ignition 2.18.0
Oct 9 07:13:32.322531 ignition[789]: Stage: kargs
Oct 9 07:13:32.322688 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:32.322700 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:32.323560 ignition[789]: kargs: kargs passed
Oct 9 07:13:32.326987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:13:32.323610 ignition[789]: Ignition finished successfully
Oct 9 07:13:32.335609 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:13:32.351083 ignition[798]: Ignition 2.18.0
Oct 9 07:13:32.351094 ignition[798]: Stage: disks
Oct 9 07:13:32.351249 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:32.351261 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:32.354215 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:13:32.352107 ignition[798]: disks: disks passed
Oct 9 07:13:32.355946 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:13:32.352152 ignition[798]: Ignition finished successfully
Oct 9 07:13:32.357858 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:13:32.359097 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:13:32.360718 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:13:32.361749 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:13:32.380460 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:13:32.391239 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.30
Oct 9 07:13:32.391254 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Oct 9 07:13:32.394278 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:13:32.399965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:13:32.402265 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:13:32.523372 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:13:32.523982 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:13:32.525396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:13:32.533407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:13:32.535159 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:13:32.536214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 07:13:32.536248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:13:32.536269 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:13:32.544198 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:13:32.545745 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:13:32.555446 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817)
Oct 9 07:13:32.555474 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:13:32.555485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:13:32.557364 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:13:32.560361 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:13:32.561950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:13:32.590097 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:13:32.595102 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:13:32.599754 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:13:32.604538 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:13:32.695857 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:13:32.708444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:13:32.710291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:13:32.717366 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:13:32.739445 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:13:32.757476 ignition[933]: INFO : Ignition 2.18.0
Oct 9 07:13:32.757476 ignition[933]: INFO : Stage: mount
Oct 9 07:13:32.759455 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:32.759455 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:32.759455 ignition[933]: INFO : mount: mount passed
Oct 9 07:13:32.759455 ignition[933]: INFO : Ignition finished successfully
Oct 9 07:13:32.760792 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:13:32.773506 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:13:33.054517 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:13:33.068590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:13:33.076017 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (947)
Oct 9 07:13:33.076064 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:13:33.076076 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:13:33.077495 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:13:33.080363 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:13:33.082093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:13:33.112393 ignition[964]: INFO : Ignition 2.18.0
Oct 9 07:13:33.112393 ignition[964]: INFO : Stage: files
Oct 9 07:13:33.114368 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:33.114368 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:33.114368 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 07:13:33.114368 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 07:13:33.114368 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:13:33.121156 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 07:13:33.117458 unknown[964]: wrote ssh authorized keys file for user: core
Oct 9 07:13:33.168398 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 07:13:33.386157 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:13:33.386157 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:13:33.390057 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:13:33.391748 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:13:33.393607 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:13:33.395309 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:13:33.397085 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:13:33.398788 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:13:33.400599 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:13:33.402528 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:13:33.404428 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:13:33.406200 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:13:33.408732 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:13:33.411165 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:13:33.413264 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 07:13:33.813640 systemd-networkd[785]: eth0: Gained IPv6LL
Oct 9 07:13:33.987251 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 07:13:35.219088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:13:35.219088 ignition[964]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 9 07:13:35.223162 ignition[964]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:13:35.250601 ignition[964]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:13:35.255776 ignition[964]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:13:35.257390 ignition[964]: INFO : files: files passed
Oct 9 07:13:35.257390 ignition[964]: INFO : Ignition finished successfully
Oct 9 07:13:35.259564 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:13:35.267644 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:13:35.269776 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:13:35.272530 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:13:35.272660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:13:35.280594 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 07:13:35.283637 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:13:35.283637 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:13:35.287980 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:13:35.285904 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:13:35.288573 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:13:35.302473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:13:35.327778 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:13:35.328844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:13:35.331460 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:13:35.333463 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:13:35.335487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:13:35.351477 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:13:35.364757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:13:35.366203 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:13:35.390831 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:13:35.392090 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:13:35.394279 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:13:35.396262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:13:35.396389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:13:35.398584 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:13:35.400403 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:13:35.402430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:13:35.404427 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:13:35.406432 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:13:35.408596 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:13:35.410713 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:13:35.412967 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:13:35.414948 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:13:35.417129 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:13:35.418901 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:13:35.419022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:13:35.421148 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:13:35.422754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:13:35.424824 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:13:35.424923 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:13:35.427036 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:13:35.427143 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:13:35.429372 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:13:35.429481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:13:35.431511 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:13:35.433264 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:13:35.436481 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:13:35.437939 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:13:35.439760 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:13:35.441830 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:13:35.441934 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:13:35.443645 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:13:35.443743 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:13:35.445709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:13:35.445829 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:13:35.448413 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:13:35.448533 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:13:35.460482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:13:35.462023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:13:35.463223 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:13:35.463371 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:13:35.465374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:13:35.465479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:13:35.470743 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:13:35.470855 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:13:35.479873 ignition[1019]: INFO : Ignition 2.18.0
Oct 9 07:13:35.479873 ignition[1019]: INFO : Stage: umount
Oct 9 07:13:35.481689 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:13:35.481689 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:13:35.481689 ignition[1019]: INFO : umount: umount passed
Oct 9 07:13:35.481689 ignition[1019]: INFO : Ignition finished successfully
Oct 9 07:13:35.483194 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:13:35.483312 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:13:35.485149 systemd[1]: Stopped target network.target - Network.
Oct 9 07:13:35.486663 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:13:35.486725 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:13:35.488566 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:13:35.488618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:13:35.490765 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:13:35.490827 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:13:35.492739 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:13:35.492800 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:13:35.494796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:13:35.496713 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:13:35.499653 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:13:35.501381 systemd-networkd[785]: eth0: DHCPv6 lease lost
Oct 9 07:13:35.504582 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:13:35.504740 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:13:35.507290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:13:35.507332 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:13:35.517432 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:13:35.519378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:13:35.519433 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:13:35.521796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:13:35.527035 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:13:35.527194 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:13:35.532302 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:13:35.532389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:13:35.534465 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:13:35.534515 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:13:35.536527 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:13:35.536575 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:13:35.544583 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:13:35.544794 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:13:35.547648 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:13:35.547764 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:13:35.549224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:13:35.549267 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:13:35.551230 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:13:35.551283 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:13:35.553463 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:13:35.553512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:13:35.555432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:13:35.555479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:13:35.563522 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:13:35.564851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:13:35.564914 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:13:35.567233 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 07:13:35.567282 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:13:35.569511 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:13:35.569560 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:13:35.572054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:13:35.572103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:13:35.574753 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:13:35.574869 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:13:35.577054 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:13:35.577158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:13:35.653026 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:13:35.653151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:13:35.655214 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:13:35.655918 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:13:35.655970 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:13:35.665568 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:13:35.674818 systemd[1]: Switching root.
Oct 9 07:13:35.711516 systemd-journald[193]: Journal stopped
Oct 9 07:13:36.842212 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:13:36.842287 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:13:36.842307 kernel: SELinux: policy capability open_perms=1
Oct 9 07:13:36.842322 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:13:36.842334 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:13:36.842358 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:13:36.842370 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:13:36.842381 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:13:36.842393 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:13:36.842405 kernel: audit: type=1403 audit(1728458016.109:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:13:36.842417 systemd[1]: Successfully loaded SELinux policy in 41.584ms.
Oct 9 07:13:36.842447 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.514ms.
Oct 9 07:13:36.842461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:13:36.842474 systemd[1]: Detected virtualization kvm.
Oct 9 07:13:36.842486 systemd[1]: Detected architecture x86-64.
Oct 9 07:13:36.842499 systemd[1]: Detected first boot.
Oct 9 07:13:36.842511 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:13:36.842523 zram_generator::config[1080]: No configuration found.
Oct 9 07:13:36.842537 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:13:36.842549 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:13:36.842565 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:13:36.842578 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:13:36.842592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:13:36.842604 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:13:36.842617 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:13:36.842629 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:13:36.842642 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:13:36.842655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:13:36.842676 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:13:36.842689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:13:36.842702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:13:36.842714 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:13:36.842727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:13:36.842740 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:13:36.842752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:13:36.842765 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:13:36.842777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:13:36.842792 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:13:36.842805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:13:36.842817 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:13:36.842830 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:13:36.842842 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:13:36.842855 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:13:36.842869 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:13:36.842881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:13:36.842897 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:13:36.842915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:13:36.842928 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:13:36.842941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:13:36.842953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:13:36.842966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:13:36.842978 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:13:36.842991 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:13:36.843003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:36.843021 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:13:36.843033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:13:36.843045 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:13:36.843058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:13:36.843070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:13:36.843083 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:13:36.843095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:13:36.843108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:13:36.843120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:13:36.843135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:13:36.843149 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:13:36.843162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:13:36.843175 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:13:36.843187 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 9 07:13:36.843205 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 9 07:13:36.843218 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:13:36.843230 kernel: fuse: init (API version 7.39)
Oct 9 07:13:36.843244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:13:36.843257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:13:36.843269 kernel: loop: module loaded
Oct 9 07:13:36.843281 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:13:36.843294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:13:36.843307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:36.843320 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:13:36.843332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:13:36.843356 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:13:36.843372 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:13:36.843402 systemd-journald[1165]: Collecting audit messages is disabled.
Oct 9 07:13:36.843425 systemd-journald[1165]: Journal started
Oct 9 07:13:36.843447 systemd-journald[1165]: Runtime Journal (/run/log/journal/cc3180b570ce4a00b2603829ae02a776) is 6.0M, max 48.4M, 42.3M free.
Oct 9 07:13:36.846968 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:13:36.847915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:13:36.849250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:13:36.850656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:13:36.852236 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:13:36.852482 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:13:36.854149 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:13:36.855674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:13:36.855890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:13:36.857399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:13:36.857618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:13:36.859182 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:13:36.859401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:13:36.861000 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:13:36.861206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:13:36.862487 kernel: ACPI: bus type drm_connector registered
Oct 9 07:13:36.863785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:13:36.865644 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:13:36.865876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:13:36.867499 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:13:36.869274 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:13:36.883157 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:13:36.892459 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:13:36.894800 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:13:36.895973 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:13:36.899419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:13:36.903101 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:13:36.904301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:13:36.907508 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:13:36.908631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:13:36.914521 systemd-journald[1165]: Time spent on flushing to /var/log/journal/cc3180b570ce4a00b2603829ae02a776 is 22.301ms for 939 entries.
Oct 9 07:13:36.914521 systemd-journald[1165]: System Journal (/var/log/journal/cc3180b570ce4a00b2603829ae02a776) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:13:36.960133 systemd-journald[1165]: Received client request to flush runtime journal.
Oct 9 07:13:36.911444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:13:36.915622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:13:36.921116 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:13:36.922590 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:13:36.932482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:13:36.936316 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:13:36.939364 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:13:36.948588 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:13:36.958846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:13:36.960774 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Oct 9 07:13:36.960787 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Oct 9 07:13:36.962300 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:13:36.964267 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:13:36.968223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:13:36.975489 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:13:37.000037 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:13:37.008468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:13:37.034740 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Oct 9 07:13:37.034760 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Oct 9 07:13:37.040735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:13:37.578371 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:13:37.589526 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:13:37.616139 systemd-udevd[1250]: Using default interface naming scheme 'v255'.
Oct 9 07:13:37.631941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:13:37.642595 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:13:37.658518 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:13:37.685303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1257)
Oct 9 07:13:37.691898 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 9 07:13:37.702444 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1267)
Oct 9 07:13:37.749712 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:13:37.788370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 9 07:13:37.800114 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 07:13:37.800449 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 07:13:37.800678 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:13:37.800696 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 07:13:37.803820 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 9 07:13:37.875370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:13:37.880301 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:13:37.885128 systemd-networkd[1259]: lo: Link UP
Oct 9 07:13:37.885144 systemd-networkd[1259]: lo: Gained carrier
Oct 9 07:13:37.889370 systemd-networkd[1259]: Enumeration completed
Oct 9 07:13:37.889817 systemd-networkd[1259]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:13:37.889829 systemd-networkd[1259]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:13:37.891540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:13:37.892926 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:13:37.893584 systemd-networkd[1259]: eth0: Link UP
Oct 9 07:13:37.893594 systemd-networkd[1259]: eth0: Gained carrier
Oct 9 07:13:37.893608 systemd-networkd[1259]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:13:37.905746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:13:37.943529 systemd-networkd[1259]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:13:38.007600 kernel: kvm_amd: TSC scaling supported
Oct 9 07:13:38.007687 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 07:13:38.007718 kernel: kvm_amd: Nested Paging enabled
Oct 9 07:13:38.007740 kernel: kvm_amd: LBR virtualization supported
Oct 9 07:13:38.008851 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 07:13:38.008887 kernel: kvm_amd: Virtual GIF supported
Oct 9 07:13:38.029398 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:13:38.063868 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:13:38.076474 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:13:38.078476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:13:38.087081 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:13:38.126508 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:13:38.128059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:13:38.141466 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:13:38.146581 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:13:38.181383 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:13:38.182851 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:13:38.184119 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:13:38.184144 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:13:38.185289 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:13:38.187363 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:13:38.203452 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:13:38.205928 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:13:38.207122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:13:38.208069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:13:38.211009 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:13:38.214787 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:13:38.218539 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:13:38.224956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:13:38.232176 kernel: loop0: detected capacity change from 0 to 139904
Oct 9 07:13:38.232262 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:13:38.244812 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:13:38.246023 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:13:38.258363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:13:38.277378 kernel: loop1: detected capacity change from 0 to 80568
Oct 9 07:13:38.316404 kernel: loop2: detected capacity change from 0 to 211296
Oct 9 07:13:38.347368 kernel: loop3: detected capacity change from 0 to 139904
Oct 9 07:13:38.357386 kernel: loop4: detected capacity change from 0 to 80568
Oct 9 07:13:38.366357 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 07:13:38.371558 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 07:13:38.372233 (sd-merge)[1327]: Merged extensions into '/usr'.
Oct 9 07:13:38.376326 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:13:38.376358 systemd[1]: Reloading...
Oct 9 07:13:38.422377 zram_generator::config[1353]: No configuration found.
Oct 9 07:13:38.462091 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:13:38.549056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:13:38.612786 systemd[1]: Reloading finished in 235 ms.
Oct 9 07:13:38.630296 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:13:38.632058 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:13:38.643476 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:13:38.645572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:13:38.649922 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:13:38.649940 systemd[1]: Reloading...
Oct 9 07:13:38.678662 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:13:38.679021 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:13:38.680051 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:13:38.684141 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Oct 9 07:13:38.684253 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Oct 9 07:13:38.690320 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:13:38.690418 zram_generator::config[1430]: No configuration found.
Oct 9 07:13:38.690336 systemd-tmpfiles[1404]: Skipping /boot
Oct 9 07:13:38.701330 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:13:38.701357 systemd-tmpfiles[1404]: Skipping /boot
Oct 9 07:13:38.813587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:13:38.878767 systemd[1]: Reloading finished in 228 ms.
Oct 9 07:13:38.900076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:13:38.917579 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:13:38.920178 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:13:38.922838 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:13:38.927249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:13:38.930479 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:13:38.937265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:38.937568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:13:38.939049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:13:38.954734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:13:38.958696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:13:38.959943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:13:38.960071 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:38.966852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:13:38.967108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:13:38.970281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:13:38.970523 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:13:38.972378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:13:38.972722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:13:38.974731 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:13:38.984925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:38.985288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:13:38.991186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:13:38.994021 augenrules[1510]: No rules
Oct 9 07:13:38.996772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:13:39.000857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:13:39.002098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:13:39.005630 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:13:39.006681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:39.008272 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:13:39.010183 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:13:39.012198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:13:39.012495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:13:39.014131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:13:39.014662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:13:39.016945 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:13:39.017163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:13:39.018295 systemd-resolved[1479]: Positive Trust Anchors:
Oct 9 07:13:39.018726 systemd-resolved[1479]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:13:39.018761 systemd-resolved[1479]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:13:39.019156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:13:39.026804 systemd-resolved[1479]: Defaulting to hostname 'linux'.
Oct 9 07:13:39.027257 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:13:39.028815 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:13:39.034275 systemd[1]: Reached target network.target - Network.
Oct 9 07:13:39.035304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:13:39.036582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:39.036815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:13:39.047700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:13:39.050255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:13:39.052409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:13:39.055738 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:13:39.056902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:13:39.057021 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:13:39.057100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:13:39.059091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:13:39.059450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:13:39.061134 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:13:39.061512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:13:39.063580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:13:39.063872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:13:39.066934 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:13:39.067196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:13:39.068803 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:13:39.074986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:13:39.075054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:13:39.088488 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:13:39.151005 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:13:39.152621 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:13:40.018359 systemd-resolved[1479]: Clock change detected. Flushing caches.
Oct 9 07:13:40.018386 systemd-timesyncd[1547]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 07:13:40.018408 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:13:40.018433 systemd-timesyncd[1547]: Initial clock synchronization to Wed 2024-10-09 07:13:40.018293 UTC.
Oct 9 07:13:40.019673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:13:40.020933 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:13:40.022219 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:13:40.022255 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:13:40.023150 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:13:40.024356 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:13:40.025618 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:13:40.026853 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:13:40.028481 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:13:40.031564 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:13:40.034098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:13:40.045539 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:13:40.046761 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:13:40.047840 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:13:40.048950 systemd[1]: System is tainted: cgroupsv1
Oct 9 07:13:40.048990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:13:40.049012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:13:40.050266 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:13:40.052505 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:13:40.054523 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:13:40.060161 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:13:40.061838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:13:40.064313 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:13:40.067892 jq[1553]: false
Oct 9 07:13:40.070947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:13:40.074160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:13:40.077226 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:13:40.084212 extend-filesystems[1554]: Found loop3
Oct 9 07:13:40.084212 extend-filesystems[1554]: Found loop4
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found loop5
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found sr0
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda1
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda2
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda3
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found usr
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda4
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda6
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda7
Oct 9 07:13:40.086237 extend-filesystems[1554]: Found vda9
Oct 9 07:13:40.086237 extend-filesystems[1554]: Checking size of /dev/vda9
Oct 9 07:13:40.084881 dbus-daemon[1552]: [system] SELinux support is enabled
Oct 9 07:13:40.086517 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:13:40.110464 extend-filesystems[1554]: Resized partition /dev/vda9
Oct 9 07:13:40.092224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:13:40.093644 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:13:40.096012 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:13:40.096617 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:13:40.106153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:13:40.106492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:13:40.106833 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:13:40.108283 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:13:40.116991 extend-filesystems[1582]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:13:40.125442 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 07:13:40.125468 update_engine[1573]: I1009 07:13:40.123518 1573 main.cc:92] Flatcar Update Engine starting
Oct 9 07:13:40.122681 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:13:40.126433 jq[1574]: true
Oct 9 07:13:40.123031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:13:40.136944 update_engine[1573]: I1009 07:13:40.136724 1573 update_check_scheduler.cc:74] Next update check in 3m56s
Oct 9 07:13:40.140533 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:13:40.150929 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1260)
Oct 9 07:13:40.154971 jq[1585]: true
Oct 9 07:13:40.161936 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 07:13:40.169946 tar[1579]: linux-amd64/helm
Oct 9 07:13:40.173437 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:13:40.189189 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:13:40.189189 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 07:13:40.189189 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 07:13:40.184237 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:13:40.192275 extend-filesystems[1554]: Resized filesystem in /dev/vda9
Oct 9 07:13:40.184265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:13:40.186616 systemd-logind[1569]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:13:40.186637 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:13:40.186662 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:13:40.186688 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:13:40.187671 systemd-logind[1569]: New seat seat0.
Oct 9 07:13:40.198632 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:13:40.208057 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:13:40.209735 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:13:40.211861 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:13:40.212245 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:13:40.220069 bash[1613]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:13:40.225494 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:13:40.227526 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 07:13:40.249308 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:13:40.331025 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:13:40.349324 containerd[1586]: time="2024-10-09T07:13:40.349230137Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:13:40.357329 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:13:40.367198 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:13:40.375168 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.375601493Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.375642961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377331366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377368155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377660884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377675101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377769438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377829240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377840030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.377954535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.378997 containerd[1586]: time="2024-10-09T07:13:40.378194525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.375484 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378211978Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378222507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378400952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378413906Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378471675Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:13:40.379519 containerd[1586]: time="2024-10-09T07:13:40.378485651Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:13:40.383990 containerd[1586]: time="2024-10-09T07:13:40.383960777Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:13:40.383990 containerd[1586]: time="2024-10-09T07:13:40.383990542Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:13:40.384048 containerd[1586]: time="2024-10-09T07:13:40.384004949Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:13:40.384048 containerd[1586]: time="2024-10-09T07:13:40.384034825Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:13:40.384084 containerd[1586]: time="2024-10-09T07:13:40.384049573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:13:40.384084 containerd[1586]: time="2024-10-09T07:13:40.384062227Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:13:40.384084 containerd[1586]: time="2024-10-09T07:13:40.384073648Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:13:40.384238 containerd[1586]: time="2024-10-09T07:13:40.384215134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:13:40.384281 containerd[1586]: time="2024-10-09T07:13:40.384264296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:13:40.384303 containerd[1586]: time="2024-10-09T07:13:40.384282790Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:13:40.384325 containerd[1586]: time="2024-10-09T07:13:40.384299882Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:13:40.384325 containerd[1586]: time="2024-10-09T07:13:40.384319289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384366 containerd[1586]: time="2024-10-09T07:13:40.384339807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384366 containerd[1586]: time="2024-10-09T07:13:40.384355767Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384416 containerd[1586]: time="2024-10-09T07:13:40.384373280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384416 containerd[1586]: time="2024-10-09T07:13:40.384388829Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384416 containerd[1586]: time="2024-10-09T07:13:40.384402575Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384416 containerd[1586]: time="2024-10-09T07:13:40.384415189Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.384487 containerd[1586]: time="2024-10-09T07:13:40.384428013Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:13:40.384566 containerd[1586]: time="2024-10-09T07:13:40.384541636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:13:40.385138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385385578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385425132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385440742Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385467582Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385527524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385541440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385553874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385566307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385579131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385592246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385604939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385616932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385631229Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:13:40.388233 containerd[1586]: time="2024-10-09T07:13:40.385821566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.388291 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385838618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385851502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385868113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385881989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385895805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389779 containerd[1586]: time="2024-10-09T07:13:40.385937824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386196058Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386244639Z" level=info msg="Connect containerd service"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386276599Z" level=info msg="using legacy CRI server"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386284393Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386400251Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386901200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386975559Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.386990758Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387001358Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387014713Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387063745Z" level=info msg="Start subscribing containerd event"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387134878Z" level=info msg="Start recovering state"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387218665Z" level=info msg="Start event monitor"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387230477Z" level=info msg="Start snapshots syncer"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387241067Z" level=info msg="Start cni network conf syncer for default"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387248732Z" level=info msg="Start streaming server"
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387295690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387351434Z" level=info msg=serving...
address=/run/containerd/containerd.sock Oct 9 07:13:40.389927 containerd[1586]: time="2024-10-09T07:13:40.387394986Z" level=info msg="containerd successfully booted in 0.039622s" Oct 9 07:13:40.397222 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:13:40.410149 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:13:40.412263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:13:40.413668 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:13:40.559957 tar[1579]: linux-amd64/LICENSE Oct 9 07:13:40.560027 tar[1579]: linux-amd64/README.md Oct 9 07:13:40.573170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:13:40.630110 systemd-networkd[1259]: eth0: Gained IPv6LL Oct 9 07:13:40.633583 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:13:40.635433 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:13:40.650164 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 07:13:40.653068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:13:40.655487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:13:40.682169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:13:40.683866 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 07:13:40.684547 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 07:13:40.687499 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:13:41.250506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:13:41.252321 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:13:41.254746 systemd[1]: Startup finished in 7.262s (kernel) + 4.319s (userspace) = 11.582s. 
Oct 9 07:13:41.276368 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:13:41.753494 kubelet[1689]: E1009 07:13:41.753401 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:13:41.758405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:13:41.758713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:13:48.956634 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:13:48.967165 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:55516.service - OpenSSH per-connection server daemon (10.0.0.1:55516). Oct 9 07:13:49.006717 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 55516 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.008500 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.017879 systemd-logind[1569]: New session 1 of user core. Oct 9 07:13:49.019240 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:13:49.029209 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:13:49.041682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:13:49.044353 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:13:49.052452 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.160109 systemd[1708]: Queued start job for default target default.target. Oct 9 07:13:49.160548 systemd[1708]: Created slice app.slice - User Application Slice. 
Oct 9 07:13:49.160580 systemd[1708]: Reached target paths.target - Paths. Oct 9 07:13:49.160598 systemd[1708]: Reached target timers.target - Timers. Oct 9 07:13:49.178065 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:13:49.186674 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:13:49.186744 systemd[1708]: Reached target sockets.target - Sockets. Oct 9 07:13:49.186757 systemd[1708]: Reached target basic.target - Basic System. Oct 9 07:13:49.186795 systemd[1708]: Reached target default.target - Main User Target. Oct 9 07:13:49.186828 systemd[1708]: Startup finished in 127ms. Oct 9 07:13:49.187637 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:13:49.189556 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:13:49.254251 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:55532.service - OpenSSH per-connection server daemon (10.0.0.1:55532). Oct 9 07:13:49.288931 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 55532 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.290295 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.294244 systemd-logind[1569]: New session 2 of user core. Oct 9 07:13:49.304172 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:13:49.357390 sshd[1721]: pam_unix(sshd:session): session closed for user core Oct 9 07:13:49.369198 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:55540.service - OpenSSH per-connection server daemon (10.0.0.1:55540). Oct 9 07:13:49.369947 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:55532.service: Deactivated successfully. Oct 9 07:13:49.372087 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:13:49.372774 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:13:49.374153 systemd-logind[1569]: Removed session 2. 
Oct 9 07:13:49.399548 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 55540 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.400833 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.404536 systemd-logind[1569]: New session 3 of user core. Oct 9 07:13:49.419163 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:13:49.468143 sshd[1727]: pam_unix(sshd:session): session closed for user core Oct 9 07:13:49.476192 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546). Oct 9 07:13:49.476936 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:55540.service: Deactivated successfully. Oct 9 07:13:49.479096 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:13:49.479781 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:13:49.481188 systemd-logind[1569]: Removed session 3. Oct 9 07:13:49.506607 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.507957 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.511704 systemd-logind[1569]: New session 4 of user core. Oct 9 07:13:49.521234 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:13:49.574188 sshd[1735]: pam_unix(sshd:session): session closed for user core Oct 9 07:13:49.585197 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:55554.service - OpenSSH per-connection server daemon (10.0.0.1:55554). Oct 9 07:13:49.585708 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:55546.service: Deactivated successfully. Oct 9 07:13:49.587480 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:13:49.588142 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:13:49.589329 systemd-logind[1569]: Removed session 4. 
Oct 9 07:13:49.615169 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 55554 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.616539 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.620283 systemd-logind[1569]: New session 5 of user core. Oct 9 07:13:49.632162 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:13:49.689413 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:13:49.689697 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:13:49.704564 sudo[1749]: pam_unix(sudo:session): session closed for user root Oct 9 07:13:49.706232 sshd[1742]: pam_unix(sshd:session): session closed for user core Oct 9 07:13:49.719204 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568). Oct 9 07:13:49.719968 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:55554.service: Deactivated successfully. Oct 9 07:13:49.722140 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:13:49.722847 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:13:49.724169 systemd-logind[1569]: Removed session 5. Oct 9 07:13:49.749973 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.751297 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.755176 systemd-logind[1569]: New session 6 of user core. Oct 9 07:13:49.765173 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 07:13:49.818116 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:13:49.818485 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:13:49.821947 sudo[1759]: pam_unix(sudo:session): session closed for user root Oct 9 07:13:49.827637 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:13:49.828026 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:13:49.845129 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:13:49.846940 auditctl[1762]: No rules Oct 9 07:13:49.848215 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:13:49.848562 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:13:49.850544 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:13:49.879139 augenrules[1781]: No rules Oct 9 07:13:49.880056 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:13:49.881365 sudo[1758]: pam_unix(sudo:session): session closed for user root Oct 9 07:13:49.883335 sshd[1751]: pam_unix(sshd:session): session closed for user core Oct 9 07:13:49.894154 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:55584.service - OpenSSH per-connection server daemon (10.0.0.1:55584). Oct 9 07:13:49.894605 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:55568.service: Deactivated successfully. Oct 9 07:13:49.896648 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:13:49.898492 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:13:49.899311 systemd-logind[1569]: Removed session 6. 
Oct 9 07:13:49.926250 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 55584 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:13:49.927678 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:13:49.931603 systemd-logind[1569]: New session 7 of user core. Oct 9 07:13:49.942164 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:13:49.994874 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:13:49.995184 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:13:50.095112 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:13:50.095444 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:13:50.657603 dockerd[1805]: time="2024-10-09T07:13:50.657519907Z" level=info msg="Starting up" Oct 9 07:13:51.124144 dockerd[1805]: time="2024-10-09T07:13:51.124098542Z" level=info msg="Loading containers: start." Oct 9 07:13:51.254944 kernel: Initializing XFRM netlink socket Oct 9 07:13:51.337845 systemd-networkd[1259]: docker0: Link UP Oct 9 07:13:51.359535 dockerd[1805]: time="2024-10-09T07:13:51.359487847Z" level=info msg="Loading containers: done." Oct 9 07:13:51.493223 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck279190273-merged.mount: Deactivated successfully. 
Oct 9 07:13:51.496275 dockerd[1805]: time="2024-10-09T07:13:51.496229127Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:13:51.496503 dockerd[1805]: time="2024-10-09T07:13:51.496470449Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:13:51.496643 dockerd[1805]: time="2024-10-09T07:13:51.496613528Z" level=info msg="Daemon has completed initialization" Oct 9 07:13:51.533343 dockerd[1805]: time="2024-10-09T07:13:51.533271591Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:13:51.533542 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:13:51.847328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:13:51.867187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:13:52.027109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:13:52.030283 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:13:52.083144 kubelet[1951]: E1009 07:13:52.083071 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:13:52.091066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:13:52.091326 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 07:13:52.680363 containerd[1586]: time="2024-10-09T07:13:52.680311842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:13:53.834714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961733483.mount: Deactivated successfully. Oct 9 07:13:55.754960 containerd[1586]: time="2024-10-09T07:13:55.754876890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:55.755658 containerd[1586]: time="2024-10-09T07:13:55.755594706Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:13:55.756908 containerd[1586]: time="2024-10-09T07:13:55.756876389Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:55.762612 containerd[1586]: time="2024-10-09T07:13:55.762584292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:55.763571 containerd[1586]: time="2024-10-09T07:13:55.763541847Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 3.083186664s" Oct 9 07:13:55.763621 containerd[1586]: time="2024-10-09T07:13:55.763575480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:13:55.790076 containerd[1586]: time="2024-10-09T07:13:55.790040442Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:13:57.844847 containerd[1586]: time="2024-10-09T07:13:57.844750282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:57.845725 containerd[1586]: time="2024-10-09T07:13:57.845642545Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:13:57.847212 containerd[1586]: time="2024-10-09T07:13:57.847170310Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:57.850765 containerd[1586]: time="2024-10-09T07:13:57.850717631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:13:57.851940 containerd[1586]: time="2024-10-09T07:13:57.851897453Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.06165946s" Oct 9 07:13:57.852010 containerd[1586]: time="2024-10-09T07:13:57.851946064Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:13:57.876326 containerd[1586]: time="2024-10-09T07:13:57.876259482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:14:00.059484 containerd[1586]: 
time="2024-10-09T07:14:00.059417122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:00.060940 containerd[1586]: time="2024-10-09T07:14:00.060689848Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:14:00.062023 containerd[1586]: time="2024-10-09T07:14:00.061958928Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:00.071557 containerd[1586]: time="2024-10-09T07:14:00.071505538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:00.072675 containerd[1586]: time="2024-10-09T07:14:00.072620268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 2.196306404s" Oct 9 07:14:00.072675 containerd[1586]: time="2024-10-09T07:14:00.072670161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:14:00.100737 containerd[1586]: time="2024-10-09T07:14:00.100679208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:14:01.280170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531926778.mount: Deactivated successfully. 
Oct 9 07:14:02.049979 containerd[1586]: time="2024-10-09T07:14:02.049876002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:02.051030 containerd[1586]: time="2024-10-09T07:14:02.050998046Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 07:14:02.052377 containerd[1586]: time="2024-10-09T07:14:02.052310517Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:02.054859 containerd[1586]: time="2024-10-09T07:14:02.054814922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:02.055401 containerd[1586]: time="2024-10-09T07:14:02.055355256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.954626504s" Oct 9 07:14:02.055401 containerd[1586]: time="2024-10-09T07:14:02.055390762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:14:02.080003 containerd[1586]: time="2024-10-09T07:14:02.079959949Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:14:02.097384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:14:02.112183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:14:02.271937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:14:02.277831 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:14:02.433833 kubelet[2076]: E1009 07:14:02.433673 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:14:02.439028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:14:02.439343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:14:02.869217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028216257.mount: Deactivated successfully. Oct 9 07:14:03.762584 containerd[1586]: time="2024-10-09T07:14:03.762523085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:03.763385 containerd[1586]: time="2024-10-09T07:14:03.763347050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:14:03.764466 containerd[1586]: time="2024-10-09T07:14:03.764436342Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:03.767098 containerd[1586]: time="2024-10-09T07:14:03.767057918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:03.768311 containerd[1586]: time="2024-10-09T07:14:03.768274919Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.68827714s"
Oct 9 07:14:03.768311 containerd[1586]: time="2024-10-09T07:14:03.768303563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 07:14:03.799894 containerd[1586]: time="2024-10-09T07:14:03.799854922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 07:14:04.272059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222882480.mount: Deactivated successfully.
Oct 9 07:14:04.282811 containerd[1586]: time="2024-10-09T07:14:04.282773795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:04.284078 containerd[1586]: time="2024-10-09T07:14:04.284044547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 9 07:14:04.285460 containerd[1586]: time="2024-10-09T07:14:04.285418093Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:04.287792 containerd[1586]: time="2024-10-09T07:14:04.287759443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:04.288441 containerd[1586]: time="2024-10-09T07:14:04.288409101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 488.514745ms"
Oct 9 07:14:04.288441 containerd[1586]: time="2024-10-09T07:14:04.288436933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 07:14:04.312852 containerd[1586]: time="2024-10-09T07:14:04.312805484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 07:14:04.848598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194352051.mount: Deactivated successfully.
Oct 9 07:14:08.393741 containerd[1586]: time="2024-10-09T07:14:08.393674232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:08.394507 containerd[1586]: time="2024-10-09T07:14:08.394453874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Oct 9 07:14:08.395588 containerd[1586]: time="2024-10-09T07:14:08.395546192Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:08.398216 containerd[1586]: time="2024-10-09T07:14:08.398188085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:08.399308 containerd[1586]: time="2024-10-09T07:14:08.399274923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.086431568s"
Oct 9 07:14:08.399308 containerd[1586]: time="2024-10-09T07:14:08.399306372Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 07:14:11.444324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:14:11.455125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:14:11.473493 systemd[1]: Reloading requested from client PID 2274 ('systemctl') (unit session-7.scope)...
Oct 9 07:14:11.473515 systemd[1]: Reloading...
Oct 9 07:14:11.544940 zram_generator::config[2311]: No configuration found.
Oct 9 07:14:11.765821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:14:11.838536 systemd[1]: Reloading finished in 364 ms.
Oct 9 07:14:11.881298 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 07:14:11.881411 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 07:14:11.881805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:14:11.894425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:14:12.031297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:14:12.037867 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:14:12.149025 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:14:12.149025 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:14:12.149025 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:14:12.149494 kubelet[2371]: I1009 07:14:12.149059 2371 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:14:12.733424 kubelet[2371]: I1009 07:14:12.732088 2371 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 07:14:12.733424 kubelet[2371]: I1009 07:14:12.732131 2371 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:14:12.733424 kubelet[2371]: I1009 07:14:12.732492 2371 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 07:14:12.748337 kubelet[2371]: E1009 07:14:12.748290 2371 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.750380 kubelet[2371]: I1009 07:14:12.750357 2371 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:14:12.760137 kubelet[2371]: I1009 07:14:12.760101 2371 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:14:12.761086 kubelet[2371]: I1009 07:14:12.761058 2371 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:14:12.761272 kubelet[2371]: I1009 07:14:12.761235 2371 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 07:14:12.761364 kubelet[2371]: I1009 07:14:12.761274 2371 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:14:12.761364 kubelet[2371]: I1009 07:14:12.761285 2371 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 07:14:12.761438 kubelet[2371]: I1009 07:14:12.761418 2371 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:14:12.761545 kubelet[2371]: I1009 07:14:12.761521 2371 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 07:14:12.761545 kubelet[2371]: I1009 07:14:12.761539 2371 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:14:12.761601 kubelet[2371]: I1009 07:14:12.761574 2371 kubelet.go:312] "Adding apiserver pod source"
Oct 9 07:14:12.761601 kubelet[2371]: I1009 07:14:12.761593 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:14:12.762865 kubelet[2371]: I1009 07:14:12.762759 2371 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 9 07:14:12.763788 kubelet[2371]: W1009 07:14:12.763586 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.763788 kubelet[2371]: E1009 07:14:12.763631 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.763788 kubelet[2371]: W1009 07:14:12.763701 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.763788 kubelet[2371]: E1009 07:14:12.763755 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.765897 kubelet[2371]: I1009 07:14:12.765828 2371 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:14:12.766771 kubelet[2371]: W1009 07:14:12.766753 2371 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 07:14:12.767882 kubelet[2371]: I1009 07:14:12.767859 2371 server.go:1256] "Started kubelet"
Oct 9 07:14:12.768098 kubelet[2371]: I1009 07:14:12.768070 2371 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:14:12.768185 kubelet[2371]: I1009 07:14:12.768161 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:14:12.768717 kubelet[2371]: I1009 07:14:12.768700 2371 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:14:12.769627 kubelet[2371]: I1009 07:14:12.769609 2371 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 07:14:12.770387 kubelet[2371]: I1009 07:14:12.770354 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:14:12.772440 kubelet[2371]: I1009 07:14:12.772424 2371 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 07:14:12.772793 kubelet[2371]: I1009 07:14:12.772772 2371 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 07:14:12.772859 kubelet[2371]: I1009 07:14:12.772845 2371 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 07:14:12.773377 kubelet[2371]: W1009 07:14:12.773161 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.774125 kubelet[2371]: E1009 07:14:12.774106 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.774967 kubelet[2371]: E1009 07:14:12.774906 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms"
Oct 9 07:14:12.775198 kubelet[2371]: I1009 07:14:12.775181 2371 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:14:12.775299 kubelet[2371]: I1009 07:14:12.775266 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:14:12.775961 kubelet[2371]: E1009 07:14:12.775907 2371 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fcb770f463ebf8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:14:12.767837176 +0000 UTC m=+0.725493411,LastTimestamp:2024-10-09 07:14:12.767837176 +0000 UTC m=+0.725493411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 9 07:14:12.776673 kubelet[2371]: I1009 07:14:12.776656 2371 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:14:12.777343 kubelet[2371]: E1009 07:14:12.776868 2371 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 07:14:12.790608 kubelet[2371]: I1009 07:14:12.790562 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:14:12.792457 kubelet[2371]: I1009 07:14:12.792103 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:14:12.792457 kubelet[2371]: I1009 07:14:12.792129 2371 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:14:12.792457 kubelet[2371]: I1009 07:14:12.792146 2371 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 07:14:12.792457 kubelet[2371]: E1009 07:14:12.792196 2371 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:14:12.793476 kubelet[2371]: W1009 07:14:12.793444 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.793607 kubelet[2371]: E1009 07:14:12.793562 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:12.799589 kubelet[2371]: I1009 07:14:12.799564 2371 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:14:12.799589 kubelet[2371]: I1009 07:14:12.799586 2371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:14:12.799658 kubelet[2371]: I1009 07:14:12.799601 2371 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:14:12.874308 kubelet[2371]: I1009 07:14:12.874273 2371 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 07:14:12.874735 kubelet[2371]: E1009 07:14:12.874696 2371 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Oct 9 07:14:12.892908 kubelet[2371]: E1009 07:14:12.892854 2371 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 07:14:12.975691 kubelet[2371]: E1009 07:14:12.975650 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms"
Oct 9 07:14:13.076318 kubelet[2371]: I1009 07:14:13.076185 2371 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 07:14:13.076641 kubelet[2371]: E1009 07:14:13.076602 2371 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Oct 9 07:14:13.093765 kubelet[2371]: E1009 07:14:13.093735 2371 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 07:14:13.137444 kubelet[2371]: I1009 07:14:13.137422 2371 policy_none.go:49] "None policy: Start"
Oct 9 07:14:13.138227 kubelet[2371]: I1009 07:14:13.138195 2371 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:14:13.138227 kubelet[2371]: I1009 07:14:13.138226 2371 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:14:13.146593 kubelet[2371]: I1009 07:14:13.146562 2371 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:14:13.146835 kubelet[2371]: I1009 07:14:13.146821 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:14:13.150014 kubelet[2371]: E1009 07:14:13.149979 2371 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 9 07:14:13.376270 kubelet[2371]: E1009 07:14:13.376213 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms"
Oct 9 07:14:13.479022 kubelet[2371]: I1009 07:14:13.478962 2371 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 07:14:13.479293 kubelet[2371]: E1009 07:14:13.479276 2371 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Oct 9 07:14:13.494497 kubelet[2371]: I1009 07:14:13.494468 2371 topology_manager.go:215] "Topology Admit Handler" podUID="93ce0992c26a8f5b20e16d87c90efe36" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 07:14:13.495624 kubelet[2371]: I1009 07:14:13.495601 2371 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 07:14:13.496469 kubelet[2371]: I1009 07:14:13.496432 2371 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 07:14:13.577523 kubelet[2371]: I1009 07:14:13.577490 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:13.577523 kubelet[2371]: I1009 07:14:13.577527 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:13.577663 kubelet[2371]: I1009 07:14:13.577551 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:13.577663 kubelet[2371]: I1009 07:14:13.577571 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:13.577663 kubelet[2371]: I1009 07:14:13.577591 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:13.577663 kubelet[2371]: I1009 07:14:13.577609 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:13.577663 kubelet[2371]: I1009 07:14:13.577630 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost"
Oct 9 07:14:13.577814 kubelet[2371]: I1009 07:14:13.577764 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:13.577858 kubelet[2371]: I1009 07:14:13.577841 2371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:13.667287 kubelet[2371]: W1009 07:14:13.667122 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:13.667287 kubelet[2371]: E1009 07:14:13.667183 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:13.800456 kubelet[2371]: E1009 07:14:13.800409 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:13.801115 containerd[1586]: time="2024-10-09T07:14:13.801075509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93ce0992c26a8f5b20e16d87c90efe36,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:13.801540 containerd[1586]: time="2024-10-09T07:14:13.801508847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:13.801571 kubelet[2371]: E1009 07:14:13.801136 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:13.803022 kubelet[2371]: E1009 07:14:13.802989 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:13.803954 containerd[1586]: time="2024-10-09T07:14:13.803398011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:13.829088 kubelet[2371]: W1009 07:14:13.829029 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:13.829088 kubelet[2371]: E1009 07:14:13.829088 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:14.177416 kubelet[2371]: E1009 07:14:14.177376 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s"
Oct 9 07:14:14.180869 kubelet[2371]: W1009 07:14:14.180816 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:14.180869 kubelet[2371]: E1009 07:14:14.180869 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:14.244593 kubelet[2371]: W1009 07:14:14.244493 2371 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:14.244593 kubelet[2371]: E1009 07:14:14.244564 2371 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:14.280910 kubelet[2371]: I1009 07:14:14.280890 2371 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 07:14:14.281255 kubelet[2371]: E1009 07:14:14.281222 2371 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Oct 9 07:14:14.817241 kubelet[2371]: E1009 07:14:14.817189 2371 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.30:6443: connect: connection refused
Oct 9 07:14:15.004302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138640659.mount: Deactivated successfully.
Oct 9 07:14:15.010116 containerd[1586]: time="2024-10-09T07:14:15.010079862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:14:15.011118 containerd[1586]: time="2024-10-09T07:14:15.011080440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:14:15.012022 containerd[1586]: time="2024-10-09T07:14:15.011980976Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:14:15.012866 containerd[1586]: time="2024-10-09T07:14:15.012813620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:14:15.013751 containerd[1586]: time="2024-10-09T07:14:15.013688014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:14:15.014525 containerd[1586]: time="2024-10-09T07:14:15.014495811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 9 07:14:15.015607 containerd[1586]: time="2024-10-09T07:14:15.015579368Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:14:15.019516 containerd[1586]: time="2024-10-09T07:14:15.019460847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:14:15.020276 containerd[1586]: time="2024-10-09T07:14:15.020245999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.21907204s"
Oct 9 07:14:15.021443 containerd[1586]: time="2024-10-09T07:14:15.021415703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.219813335s"
Oct 9 07:14:15.022649 containerd[1586]: time="2024-10-09T07:14:15.022618330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.219161757s"
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.243003035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.243073061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.243091126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.243104081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.242847105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.242944883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.242976545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:14:15.243185 containerd[1586]: time="2024-10-09T07:14:15.242990833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.245950 containerd[1586]: time="2024-10-09T07:14:15.245476160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:14:15.245950 containerd[1586]: time="2024-10-09T07:14:15.245516698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.245950 containerd[1586]: time="2024-10-09T07:14:15.245543601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:14:15.245950 containerd[1586]: time="2024-10-09T07:14:15.245553920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:15.337202 containerd[1586]: time="2024-10-09T07:14:15.337154824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93ce0992c26a8f5b20e16d87c90efe36,Namespace:kube-system,Attempt:0,} returns sandbox id \"7786f68148bfe66c4b371a611900c05bdd7522fd2d4e029af25260a21b32dfd9\""
Oct 9 07:14:15.339239 kubelet[2371]: E1009 07:14:15.338740 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.340106 containerd[1586]: time="2024-10-09T07:14:15.340081152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea72ea77bb4cbed1bb489d0f151f5b7a0ab0c50d57d074205033a2ebc142f59d\""
Oct 9 07:14:15.340207 containerd[1586]: time="2024-10-09T07:14:15.340176787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"812826cdb0df30f4f4ee77002eaaf3ed2dc5719202aab55844001ed293f4803a\""
Oct 9 07:14:15.340862 kubelet[2371]: E1009 07:14:15.340843 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.341029 kubelet[2371]: E1009 07:14:15.341012 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.343255 containerd[1586]: time="2024-10-09T07:14:15.343217124Z" level=info msg="CreateContainer within sandbox \"7786f68148bfe66c4b371a611900c05bdd7522fd2d4e029af25260a21b32dfd9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 9 07:14:15.343366 containerd[1586]: time="2024-10-09T07:14:15.343334280Z" level=info msg="CreateContainer within sandbox \"812826cdb0df30f4f4ee77002eaaf3ed2dc5719202aab55844001ed293f4803a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 9 07:14:15.343535 containerd[1586]: time="2024-10-09T07:14:15.343515148Z" level=info msg="CreateContainer within sandbox \"ea72ea77bb4cbed1bb489d0f151f5b7a0ab0c50d57d074205033a2ebc142f59d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 9 07:14:15.364493 containerd[1586]: time="2024-10-09T07:14:15.364458889Z" level=info msg="CreateContainer within sandbox \"7786f68148bfe66c4b371a611900c05bdd7522fd2d4e029af25260a21b32dfd9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ab6452b7f9ac1b4fd8d0d17f6b690c567fedc095793d879b3828fe94d909347\""
Oct 9 07:14:15.364882 containerd[1586]: time="2024-10-09T07:14:15.364855744Z" level=info msg="StartContainer for \"2ab6452b7f9ac1b4fd8d0d17f6b690c567fedc095793d879b3828fe94d909347\""
Oct 9 07:14:15.373179 containerd[1586]: time="2024-10-09T07:14:15.373115207Z" level=info msg="CreateContainer within sandbox \"ea72ea77bb4cbed1bb489d0f151f5b7a0ab0c50d57d074205033a2ebc142f59d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61469d0119218ac5743e631107fa56861cab80be1435a826487a51f7869efd5c\""
Oct 9 07:14:15.373823 containerd[1586]: time="2024-10-09T07:14:15.373792742Z" level=info msg="StartContainer for \"61469d0119218ac5743e631107fa56861cab80be1435a826487a51f7869efd5c\""
Oct 9 07:14:15.376718 containerd[1586]: time="2024-10-09T07:14:15.376630740Z" level=info msg="CreateContainer within sandbox \"812826cdb0df30f4f4ee77002eaaf3ed2dc5719202aab55844001ed293f4803a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ad8c41777b6678cd9c7055c471d1b01b9dc485705367414d5bd5a6ea39f9986f\""
Oct 9 07:14:15.378001 containerd[1586]: time="2024-10-09T07:14:15.377115124Z" level=info msg="StartContainer for \"ad8c41777b6678cd9c7055c471d1b01b9dc485705367414d5bd5a6ea39f9986f\""
Oct 9 07:14:15.440214 containerd[1586]: time="2024-10-09T07:14:15.440171202Z" level=info msg="StartContainer for \"2ab6452b7f9ac1b4fd8d0d17f6b690c567fedc095793d879b3828fe94d909347\" returns successfully"
Oct 9 07:14:15.465787 containerd[1586]: time="2024-10-09T07:14:15.465532203Z" level=info msg="StartContainer for \"61469d0119218ac5743e631107fa56861cab80be1435a826487a51f7869efd5c\" returns successfully"
Oct 9 07:14:15.465787 containerd[1586]: time="2024-10-09T07:14:15.465595666Z" level=info msg="StartContainer for \"ad8c41777b6678cd9c7055c471d1b01b9dc485705367414d5bd5a6ea39f9986f\" returns successfully"
Oct 9 07:14:15.803394 kubelet[2371]: E1009 07:14:15.803356 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.808197 kubelet[2371]: E1009 07:14:15.807863 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.808859 kubelet[2371]: E1009 07:14:15.808836 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:15.883588 kubelet[2371]: I1009 07:14:15.883557 2371
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:14:16.235949 kubelet[2371]: E1009 07:14:16.234890 2371 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 07:14:16.317470 kubelet[2371]: I1009 07:14:16.317429 2371 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 07:14:16.763694 kubelet[2371]: I1009 07:14:16.763653 2371 apiserver.go:52] "Watching apiserver" Oct 9 07:14:16.773843 kubelet[2371]: I1009 07:14:16.773797 2371 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:14:16.812671 kubelet[2371]: E1009 07:14:16.812627 2371 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 9 07:14:16.812671 kubelet[2371]: E1009 07:14:16.812639 2371 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 9 07:14:16.813087 kubelet[2371]: E1009 07:14:16.813065 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:16.813087 kubelet[2371]: E1009 07:14:16.813086 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:19.255117 systemd[1]: Reloading requested from client PID 2648 ('systemctl') (unit session-7.scope)... Oct 9 07:14:19.255136 systemd[1]: Reloading... Oct 9 07:14:19.329997 zram_generator::config[2689]: No configuration found. 
Oct 9 07:14:19.443932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:14:19.521535 systemd[1]: Reloading finished in 266 ms.
Oct 9 07:14:19.558704 kubelet[2371]: I1009 07:14:19.558618 2371 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:14:19.558696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:14:19.576551 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 07:14:19.577068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:14:19.592112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:14:19.735256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:14:19.740400 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:14:19.788151 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:14:19.788151 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:14:19.788151 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:14:19.788151 kubelet[2740]: I1009 07:14:19.788113 2740 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:14:19.793400 kubelet[2740]: I1009 07:14:19.793341 2740 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 07:14:19.793400 kubelet[2740]: I1009 07:14:19.793374 2740 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:14:19.794062 kubelet[2740]: I1009 07:14:19.793696 2740 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 07:14:19.796968 kubelet[2740]: I1009 07:14:19.796946 2740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 9 07:14:19.801425 kubelet[2740]: I1009 07:14:19.801369 2740 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:14:19.811060 kubelet[2740]: I1009 07:14:19.811022 2740 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:14:19.811713 kubelet[2740]: I1009 07:14:19.811696 2740 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:14:19.811946 kubelet[2740]: I1009 07:14:19.811909 2740 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 07:14:19.812062 kubelet[2740]: I1009 07:14:19.811959 2740 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:14:19.812062 kubelet[2740]: I1009 07:14:19.811970 2740 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 07:14:19.812062 kubelet[2740]: I1009 07:14:19.812001 2740 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:14:19.812134 kubelet[2740]: I1009 07:14:19.812122 2740 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 07:14:19.812157 kubelet[2740]: I1009 07:14:19.812138 2740 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:14:19.812178 kubelet[2740]: I1009 07:14:19.812165 2740 kubelet.go:312] "Adding apiserver pod source"
Oct 9 07:14:19.812178 kubelet[2740]: I1009 07:14:19.812177 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:14:19.814594 kubelet[2740]: I1009 07:14:19.814263 2740 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 9 07:14:19.814594 kubelet[2740]: I1009 07:14:19.814565 2740 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:14:19.815324 kubelet[2740]: I1009 07:14:19.815025 2740 server.go:1256] "Started kubelet"
Oct 9 07:14:19.815433 kubelet[2740]: I1009 07:14:19.815392 2740 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:14:19.816785 kubelet[2740]: I1009 07:14:19.816306 2740 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 07:14:19.819684 kubelet[2740]: I1009 07:14:19.818998 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:14:19.824221 kubelet[2740]: I1009 07:14:19.824127 2740 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 07:14:19.826534 kubelet[2740]: I1009 07:14:19.824543 2740 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 07:14:19.826534 kubelet[2740]: I1009 07:14:19.824889 2740 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 07:14:19.828055 kubelet[2740]: I1009 07:14:19.828008 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:14:19.828474 kubelet[2740]: I1009 07:14:19.828286 2740 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:14:19.830208 kubelet[2740]: I1009 07:14:19.830190 2740 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:14:19.831032 kubelet[2740]: I1009 07:14:19.831011 2740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:14:19.834764 kubelet[2740]: I1009 07:14:19.834326 2740 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:14:19.837029 kubelet[2740]: I1009 07:14:19.836997 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:14:19.838278 kubelet[2740]: I1009 07:14:19.838257 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:14:19.838332 kubelet[2740]: I1009 07:14:19.838282 2740 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:14:19.838332 kubelet[2740]: I1009 07:14:19.838302 2740 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 07:14:19.838406 kubelet[2740]: E1009 07:14:19.838348 2740 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:14:19.845855 kubelet[2740]: E1009 07:14:19.845819 2740 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.892794 2740 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.892815 2740 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.892833 2740 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.892984 2740 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.893003 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 07:14:19.893099 kubelet[2740]: I1009 07:14:19.893010 2740 policy_none.go:49] "None policy: Start"
Oct 9 07:14:19.893504 kubelet[2740]: I1009 07:14:19.893429 2740 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:14:19.893504 kubelet[2740]: I1009 07:14:19.893447 2740 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:14:19.893610 kubelet[2740]: I1009 07:14:19.893593 2740 state_mem.go:75] "Updated machine memory state"
Oct 9 07:14:19.895232 kubelet[2740]: I1009 07:14:19.895193 2740 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:14:19.895954 kubelet[2740]: I1009 07:14:19.895872 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:14:19.939316 kubelet[2740]: I1009 07:14:19.939273 2740 topology_manager.go:215] "Topology Admit Handler" podUID="93ce0992c26a8f5b20e16d87c90efe36" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 07:14:19.939462 kubelet[2740]: I1009 07:14:19.939348 2740 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 07:14:19.939462 kubelet[2740]: I1009 07:14:19.939411 2740 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 07:14:20.001619 kubelet[2740]: I1009 07:14:20.001577 2740 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 07:14:20.008310 kubelet[2740]: I1009 07:14:20.008274 2740 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 9 07:14:20.008463 kubelet[2740]: I1009 07:14:20.008358 2740 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 9 07:14:20.029941 kubelet[2740]: I1009 07:14:20.025270 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:20.029941 kubelet[2740]: I1009 07:14:20.025305 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:20.029941 kubelet[2740]: I1009 07:14:20.025324 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:20.029941 kubelet[2740]: I1009 07:14:20.025348 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost"
Oct 9 07:14:20.029941 kubelet[2740]: I1009 07:14:20.025366 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:20.030194 kubelet[2740]: I1009 07:14:20.025390 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:20.030194 kubelet[2740]: I1009 07:14:20.025406 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:20.030194 kubelet[2740]: I1009 07:14:20.025422 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93ce0992c26a8f5b20e16d87c90efe36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93ce0992c26a8f5b20e16d87c90efe36\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:20.030194 kubelet[2740]: I1009 07:14:20.025444 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 07:14:20.246591 kubelet[2740]: E1009 07:14:20.246542 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.246591 kubelet[2740]: E1009 07:14:20.246551 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.247029 kubelet[2740]: E1009 07:14:20.247006 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.815112 kubelet[2740]: I1009 07:14:20.815063 2740 apiserver.go:52] "Watching apiserver"
Oct 9 07:14:20.824925 kubelet[2740]: I1009 07:14:20.824870 2740 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 9 07:14:20.853494 kubelet[2740]: E1009 07:14:20.853458 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.854760 kubelet[2740]: E1009 07:14:20.853617 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.862141 kubelet[2740]: E1009 07:14:20.861994 2740 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 9 07:14:20.862464 kubelet[2740]: E1009 07:14:20.862446 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:20.870992 kubelet[2740]: I1009 07:14:20.870937 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.870869149 podStartE2EDuration="1.870869149s" podCreationTimestamp="2024-10-09 07:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:14:20.870741324 +0000 UTC m=+1.126075739" watchObservedRunningTime="2024-10-09 07:14:20.870869149 +0000 UTC m=+1.126203564"
Oct 9 07:14:20.879120 kubelet[2740]: I1009 07:14:20.878654 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8786110950000001 podStartE2EDuration="1.878611095s" podCreationTimestamp="2024-10-09 07:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:14:20.877413114 +0000 UTC m=+1.132747529" watchObservedRunningTime="2024-10-09 07:14:20.878611095 +0000 UTC m=+1.133945510"
Oct 9 07:14:20.896629 kubelet[2740]: I1009 07:14:20.896591 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8965558329999999 podStartE2EDuration="1.896555833s" podCreationTimestamp="2024-10-09 07:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:14:20.889412041 +0000 UTC m=+1.144746456" watchObservedRunningTime="2024-10-09 07:14:20.896555833 +0000 UTC m=+1.151890248"
Oct 9 07:14:21.854250 kubelet[2740]: E1009 07:14:21.854210 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:22.807262 kubelet[2740]: E1009 07:14:22.807230 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:24.195847 kubelet[2740]: E1009 07:14:24.195816 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:24.200767 sudo[1794]: pam_unix(sudo:session): session closed for user root
Oct 9 07:14:24.202649 sshd[1787]: pam_unix(sshd:session): session closed for user core
Oct 9 07:14:24.207041 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:55584.service: Deactivated successfully.
Oct 9 07:14:24.209676 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 07:14:24.210475 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit.
Oct 9 07:14:24.211394 systemd-logind[1569]: Removed session 7.
Oct 9 07:14:25.673049 update_engine[1573]: I1009 07:14:25.672994 1573 update_attempter.cc:509] Updating boot flags...
Oct 9 07:14:25.699957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2834)
Oct 9 07:14:25.734959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2835)
Oct 9 07:14:25.769948 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2835)
Oct 9 07:14:30.450165 kubelet[2740]: E1009 07:14:30.450119 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:32.811172 kubelet[2740]: E1009 07:14:32.811126 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:34.002611 kubelet[2740]: I1009 07:14:34.002567 2740 topology_manager.go:215] "Topology Admit Handler" podUID="defb8b27-53c1-4433-a9f1-bb4b775b7eed" podNamespace="kube-system" podName="kube-proxy-2xng5"
Oct 9 07:14:34.018482 kubelet[2740]: I1009 07:14:34.018432 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/defb8b27-53c1-4433-a9f1-bb4b775b7eed-kube-proxy\") pod \"kube-proxy-2xng5\" (UID: \"defb8b27-53c1-4433-a9f1-bb4b775b7eed\") " pod="kube-system/kube-proxy-2xng5"
Oct 9 07:14:34.018482 kubelet[2740]: I1009 07:14:34.018476 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/defb8b27-53c1-4433-a9f1-bb4b775b7eed-xtables-lock\") pod \"kube-proxy-2xng5\" (UID: \"defb8b27-53c1-4433-a9f1-bb4b775b7eed\") " pod="kube-system/kube-proxy-2xng5"
Oct 9 07:14:34.018482 kubelet[2740]: I1009 07:14:34.018504 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/defb8b27-53c1-4433-a9f1-bb4b775b7eed-lib-modules\") pod \"kube-proxy-2xng5\" (UID: \"defb8b27-53c1-4433-a9f1-bb4b775b7eed\") " pod="kube-system/kube-proxy-2xng5"
Oct 9 07:14:34.018730 kubelet[2740]: I1009 07:14:34.018530 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkd8w\" (UniqueName: \"kubernetes.io/projected/defb8b27-53c1-4433-a9f1-bb4b775b7eed-kube-api-access-lkd8w\") pod \"kube-proxy-2xng5\" (UID: \"defb8b27-53c1-4433-a9f1-bb4b775b7eed\") " pod="kube-system/kube-proxy-2xng5"
Oct 9 07:14:34.021744 kubelet[2740]: I1009 07:14:34.021716 2740 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 07:14:34.024127 containerd[1586]: time="2024-10-09T07:14:34.024069329Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 07:14:34.027230 kubelet[2740]: I1009 07:14:34.024328 2740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 07:14:34.124066 kubelet[2740]: E1009 07:14:34.124021 2740 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 9 07:14:34.124066 kubelet[2740]: E1009 07:14:34.124055 2740 projected.go:200] Error preparing data for projected volume kube-api-access-lkd8w for pod kube-system/kube-proxy-2xng5: configmap "kube-root-ca.crt" not found
Oct 9 07:14:34.124225 kubelet[2740]: E1009 07:14:34.124117 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/defb8b27-53c1-4433-a9f1-bb4b775b7eed-kube-api-access-lkd8w podName:defb8b27-53c1-4433-a9f1-bb4b775b7eed nodeName:}" failed. No retries permitted until 2024-10-09 07:14:34.624094162 +0000 UTC m=+14.879428577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lkd8w" (UniqueName: "kubernetes.io/projected/defb8b27-53c1-4433-a9f1-bb4b775b7eed-kube-api-access-lkd8w") pod "kube-proxy-2xng5" (UID: "defb8b27-53c1-4433-a9f1-bb4b775b7eed") : configmap "kube-root-ca.crt" not found
Oct 9 07:14:34.202366 kubelet[2740]: E1009 07:14:34.202302 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:34.834546 kubelet[2740]: I1009 07:14:34.834490 2740 topology_manager.go:215] "Topology Admit Handler" podUID="52164d96-373a-4640-be7f-48bb5f0b1e93" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-tmczk"
Oct 9 07:14:34.922310 kubelet[2740]: E1009 07:14:34.922260 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:34.922810 containerd[1586]: time="2024-10-09T07:14:34.922766348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xng5,Uid:defb8b27-53c1-4433-a9f1-bb4b775b7eed,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:34.924476 kubelet[2740]: I1009 07:14:34.924435 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52164d96-373a-4640-be7f-48bb5f0b1e93-var-lib-calico\") pod \"tigera-operator-5d56685c77-tmczk\" (UID: \"52164d96-373a-4640-be7f-48bb5f0b1e93\") " pod="tigera-operator/tigera-operator-5d56685c77-tmczk"
Oct 9 07:14:34.924476 kubelet[2740]: I1009 07:14:34.924468 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbgt\" (UniqueName: \"kubernetes.io/projected/52164d96-373a-4640-be7f-48bb5f0b1e93-kube-api-access-wwbgt\") pod \"tigera-operator-5d56685c77-tmczk\" (UID: \"52164d96-373a-4640-be7f-48bb5f0b1e93\") " pod="tigera-operator/tigera-operator-5d56685c77-tmczk"
Oct 9 07:14:34.949228 containerd[1586]: time="2024-10-09T07:14:34.948876530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:14:34.949228 containerd[1586]: time="2024-10-09T07:14:34.949004813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:34.949228 containerd[1586]: time="2024-10-09T07:14:34.949035771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:14:34.949228 containerd[1586]: time="2024-10-09T07:14:34.949047513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:34.992860 containerd[1586]: time="2024-10-09T07:14:34.992813778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xng5,Uid:defb8b27-53c1-4433-a9f1-bb4b775b7eed,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc2d20942be486de914cd82c31f78f070f435634fc493749be8e36eb1e86ae77\""
Oct 9 07:14:34.993558 kubelet[2740]: E1009 07:14:34.993539 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:34.995204 containerd[1586]: time="2024-10-09T07:14:34.995155245Z" level=info msg="CreateContainer within sandbox \"cc2d20942be486de914cd82c31f78f070f435634fc493749be8e36eb1e86ae77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 07:14:35.010222 containerd[1586]: time="2024-10-09T07:14:35.010182229Z" level=info msg="CreateContainer within sandbox \"cc2d20942be486de914cd82c31f78f070f435634fc493749be8e36eb1e86ae77\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"77136d778dc3015d8e235b30861e329ddfa23f1a37be2fc1e626f37224fe0ac5\""
Oct 9 07:14:35.010623 containerd[1586]: time="2024-10-09T07:14:35.010592905Z" level=info msg="StartContainer for \"77136d778dc3015d8e235b30861e329ddfa23f1a37be2fc1e626f37224fe0ac5\""
Oct 9 07:14:35.071018 containerd[1586]: time="2024-10-09T07:14:35.070965801Z" level=info msg="StartContainer for \"77136d778dc3015d8e235b30861e329ddfa23f1a37be2fc1e626f37224fe0ac5\" returns successfully"
Oct 9 07:14:35.139931 containerd[1586]: time="2024-10-09T07:14:35.139882449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-tmczk,Uid:52164d96-373a-4640-be7f-48bb5f0b1e93,Namespace:tigera-operator,Attempt:0,}"
Oct 9 07:14:35.164476 containerd[1586]: time="2024-10-09T07:14:35.164372677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:14:35.164476 containerd[1586]: time="2024-10-09T07:14:35.164416670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:35.164476 containerd[1586]: time="2024-10-09T07:14:35.164433412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:14:35.164476 containerd[1586]: time="2024-10-09T07:14:35.164442218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:14:35.213789 containerd[1586]: time="2024-10-09T07:14:35.213734422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-tmczk,Uid:52164d96-373a-4640-be7f-48bb5f0b1e93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ceccb7dc0553ac41f276b9b1d130ed318a2df0b49d78c585b2896b83b161aba\""
Oct 9 07:14:35.215167 containerd[1586]: time="2024-10-09T07:14:35.215140207Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 07:14:35.873813 kubelet[2740]: E1009 07:14:35.873762 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:35.880243 kubelet[2740]: I1009 07:14:35.880202 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2xng5" podStartSLOduration=2.880147465 podStartE2EDuration="2.880147465s" podCreationTimestamp="2024-10-09 07:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:14:35.87964176 +0000 UTC m=+16.134976175" watchObservedRunningTime="2024-10-09 07:14:35.880147465 +0000 UTC m=+16.135481880"
Oct 9 07:14:36.554037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583510252.mount: Deactivated successfully.
Oct 9 07:14:36.891579 containerd[1586]: time="2024-10-09T07:14:36.891534246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:36.892680 containerd[1586]: time="2024-10-09T07:14:36.892647428Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Oct 9 07:14:36.894027 containerd[1586]: time="2024-10-09T07:14:36.893968712Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:36.896177 containerd[1586]: time="2024-10-09T07:14:36.896146535Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:36.896799 containerd[1586]: time="2024-10-09T07:14:36.896757048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.681582415s" Oct 9 07:14:36.896799 containerd[1586]: time="2024-10-09T07:14:36.896796853Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:14:36.898316 containerd[1586]: time="2024-10-09T07:14:36.898270326Z" level=info msg="CreateContainer within sandbox \"2ceccb7dc0553ac41f276b9b1d130ed318a2df0b49d78c585b2896b83b161aba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:14:36.909594 containerd[1586]: time="2024-10-09T07:14:36.909546769Z" level=info msg="CreateContainer within sandbox 
\"2ceccb7dc0553ac41f276b9b1d130ed318a2df0b49d78c585b2896b83b161aba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"92956b646b14be01ba1e283fb371f67d0544e46b0957e7a29e9715063e7e1dbd\"" Oct 9 07:14:36.909985 containerd[1586]: time="2024-10-09T07:14:36.909958356Z" level=info msg="StartContainer for \"92956b646b14be01ba1e283fb371f67d0544e46b0957e7a29e9715063e7e1dbd\"" Oct 9 07:14:36.968443 containerd[1586]: time="2024-10-09T07:14:36.968399242Z" level=info msg="StartContainer for \"92956b646b14be01ba1e283fb371f67d0544e46b0957e7a29e9715063e7e1dbd\" returns successfully" Oct 9 07:14:39.718477 kubelet[2740]: I1009 07:14:39.718415 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-tmczk" podStartSLOduration=4.035922224 podStartE2EDuration="5.718337603s" podCreationTimestamp="2024-10-09 07:14:34 +0000 UTC" firstStartedPulling="2024-10-09 07:14:35.214614754 +0000 UTC m=+15.469949169" lastFinishedPulling="2024-10-09 07:14:36.897030133 +0000 UTC m=+17.152364548" observedRunningTime="2024-10-09 07:14:37.884240127 +0000 UTC m=+18.139574542" watchObservedRunningTime="2024-10-09 07:14:39.718337603 +0000 UTC m=+19.973672018" Oct 9 07:14:39.719182 kubelet[2740]: I1009 07:14:39.718572 2740 topology_manager.go:215] "Topology Admit Handler" podUID="fcdd51dc-8690-43f8-85f2-8364e9ff7980" podNamespace="calico-system" podName="calico-typha-7d97dbc964-r7blq" Oct 9 07:14:39.756613 kubelet[2740]: I1009 07:14:39.756562 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcdd51dc-8690-43f8-85f2-8364e9ff7980-tigera-ca-bundle\") pod \"calico-typha-7d97dbc964-r7blq\" (UID: \"fcdd51dc-8690-43f8-85f2-8364e9ff7980\") " pod="calico-system/calico-typha-7d97dbc964-r7blq" Oct 9 07:14:39.756613 kubelet[2740]: I1009 07:14:39.756601 2740 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fcdd51dc-8690-43f8-85f2-8364e9ff7980-typha-certs\") pod \"calico-typha-7d97dbc964-r7blq\" (UID: \"fcdd51dc-8690-43f8-85f2-8364e9ff7980\") " pod="calico-system/calico-typha-7d97dbc964-r7blq" Oct 9 07:14:39.756613 kubelet[2740]: I1009 07:14:39.756628 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c64\" (UniqueName: \"kubernetes.io/projected/fcdd51dc-8690-43f8-85f2-8364e9ff7980-kube-api-access-l9c64\") pod \"calico-typha-7d97dbc964-r7blq\" (UID: \"fcdd51dc-8690-43f8-85f2-8364e9ff7980\") " pod="calico-system/calico-typha-7d97dbc964-r7blq" Oct 9 07:14:39.762615 kubelet[2740]: I1009 07:14:39.762576 2740 topology_manager.go:215] "Topology Admit Handler" podUID="124fdd79-53a3-41c1-a1fc-b5eb7ae269fd" podNamespace="calico-system" podName="calico-node-9kxxf" Oct 9 07:14:39.857153 kubelet[2740]: I1009 07:14:39.857112 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-tigera-ca-bundle\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.857302 kubelet[2740]: I1009 07:14:39.857157 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-node-certs\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.857364 kubelet[2740]: I1009 07:14:39.857299 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-xtables-lock\") pod 
\"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.857399 kubelet[2740]: I1009 07:14:39.857381 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-cni-log-dir\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858608 kubelet[2740]: I1009 07:14:39.858030 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-cni-bin-dir\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858608 kubelet[2740]: I1009 07:14:39.858133 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvjr\" (UniqueName: \"kubernetes.io/projected/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-kube-api-access-6hvjr\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858608 kubelet[2740]: I1009 07:14:39.858174 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-lib-modules\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858608 kubelet[2740]: I1009 07:14:39.858289 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-policysync\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " 
pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858608 kubelet[2740]: I1009 07:14:39.858505 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-var-run-calico\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858830 kubelet[2740]: I1009 07:14:39.858547 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-cni-net-dir\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858830 kubelet[2740]: I1009 07:14:39.858783 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-flexvol-driver-host\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.858830 kubelet[2740]: I1009 07:14:39.858819 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/124fdd79-53a3-41c1-a1fc-b5eb7ae269fd-var-lib-calico\") pod \"calico-node-9kxxf\" (UID: \"124fdd79-53a3-41c1-a1fc-b5eb7ae269fd\") " pod="calico-system/calico-node-9kxxf" Oct 9 07:14:39.960605 kubelet[2740]: E1009 07:14:39.960576 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:39.960605 kubelet[2740]: W1009 07:14:39.960595 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file 
not found in $PATH, output: "" Oct 9 07:14:39.960605 kubelet[2740]: E1009 07:14:39.960615 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:39.963319 kubelet[2740]: E1009 07:14:39.963290 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:39.963319 kubelet[2740]: W1009 07:14:39.963309 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:39.963319 kubelet[2740]: E1009 07:14:39.963328 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.024549 kubelet[2740]: E1009 07:14:40.024436 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:40.025268 containerd[1586]: time="2024-10-09T07:14:40.024990106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d97dbc964-r7blq,Uid:fcdd51dc-8690-43f8-85f2-8364e9ff7980,Namespace:calico-system,Attempt:0,}" Oct 9 07:14:40.060257 kubelet[2740]: E1009 07:14:40.060225 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.060257 kubelet[2740]: W1009 07:14:40.060250 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.060363 kubelet[2740]: E1009 07:14:40.060269 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.116393 kubelet[2740]: E1009 07:14:40.116356 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.116595 kubelet[2740]: W1009 07:14:40.116440 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.116595 kubelet[2740]: E1009 07:14:40.116463 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.151632 kubelet[2740]: I1009 07:14:40.151582 2740 topology_manager.go:215] "Topology Admit Handler" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" podNamespace="calico-system" podName="csi-node-driver-xgjcs" Oct 9 07:14:40.152482 kubelet[2740]: E1009 07:14:40.152447 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" Oct 9 07:14:40.155179 kubelet[2740]: E1009 07:14:40.155135 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.155179 kubelet[2740]: W1009 07:14:40.155167 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.155307 kubelet[2740]: E1009 07:14:40.155231 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.155493 kubelet[2740]: E1009 07:14:40.155474 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.155493 kubelet[2740]: W1009 07:14:40.155486 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.155604 kubelet[2740]: E1009 07:14:40.155498 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.155792 kubelet[2740]: E1009 07:14:40.155714 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.155792 kubelet[2740]: W1009 07:14:40.155730 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.155792 kubelet[2740]: E1009 07:14:40.155746 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.155980 kubelet[2740]: E1009 07:14:40.155957 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.155980 kubelet[2740]: W1009 07:14:40.155964 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.155980 kubelet[2740]: E1009 07:14:40.155974 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.156181 kubelet[2740]: E1009 07:14:40.156156 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.156181 kubelet[2740]: W1009 07:14:40.156167 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.156181 kubelet[2740]: E1009 07:14:40.156177 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.156407 kubelet[2740]: E1009 07:14:40.156393 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.156407 kubelet[2740]: W1009 07:14:40.156403 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.156491 kubelet[2740]: E1009 07:14:40.156414 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.156605 kubelet[2740]: E1009 07:14:40.156588 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.156605 kubelet[2740]: W1009 07:14:40.156601 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.156698 kubelet[2740]: E1009 07:14:40.156617 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.156880 kubelet[2740]: E1009 07:14:40.156858 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.156880 kubelet[2740]: W1009 07:14:40.156868 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.156880 kubelet[2740]: E1009 07:14:40.156880 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.157286 kubelet[2740]: E1009 07:14:40.157265 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.157286 kubelet[2740]: W1009 07:14:40.157278 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.157286 kubelet[2740]: E1009 07:14:40.157291 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.157538 kubelet[2740]: E1009 07:14:40.157507 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.157538 kubelet[2740]: W1009 07:14:40.157518 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.157538 kubelet[2740]: E1009 07:14:40.157528 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.157734 kubelet[2740]: E1009 07:14:40.157718 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.157734 kubelet[2740]: W1009 07:14:40.157728 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.157734 kubelet[2740]: E1009 07:14:40.157739 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.158016 kubelet[2740]: E1009 07:14:40.157988 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.158016 kubelet[2740]: W1009 07:14:40.158005 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.158016 kubelet[2740]: E1009 07:14:40.158023 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.158317 kubelet[2740]: E1009 07:14:40.158293 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.158317 kubelet[2740]: W1009 07:14:40.158310 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.158397 kubelet[2740]: E1009 07:14:40.158324 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.158612 kubelet[2740]: E1009 07:14:40.158591 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.158774 kubelet[2740]: W1009 07:14:40.158651 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.158774 kubelet[2740]: E1009 07:14:40.158669 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.159112 kubelet[2740]: E1009 07:14:40.159050 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.159112 kubelet[2740]: W1009 07:14:40.159061 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.159112 kubelet[2740]: E1009 07:14:40.159074 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.159591 kubelet[2740]: E1009 07:14:40.159471 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.159591 kubelet[2740]: W1009 07:14:40.159482 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.159591 kubelet[2740]: E1009 07:14:40.159492 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.159854 kubelet[2740]: E1009 07:14:40.159843 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.160002 kubelet[2740]: W1009 07:14:40.159909 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.160002 kubelet[2740]: E1009 07:14:40.159934 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.160332 kubelet[2740]: E1009 07:14:40.160252 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.160332 kubelet[2740]: W1009 07:14:40.160263 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.160332 kubelet[2740]: E1009 07:14:40.160274 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.160646 kubelet[2740]: E1009 07:14:40.160566 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.160646 kubelet[2740]: W1009 07:14:40.160575 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.160646 kubelet[2740]: E1009 07:14:40.160586 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.161026 kubelet[2740]: E1009 07:14:40.160866 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.161026 kubelet[2740]: W1009 07:14:40.160876 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.161026 kubelet[2740]: E1009 07:14:40.160886 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.161396 kubelet[2740]: E1009 07:14:40.161360 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.161396 kubelet[2740]: W1009 07:14:40.161370 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.161396 kubelet[2740]: E1009 07:14:40.161381 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.161593 kubelet[2740]: I1009 07:14:40.161515 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2dq\" (UniqueName: \"kubernetes.io/projected/f2ae45c7-48f4-4c14-9998-b19005636b8c-kube-api-access-zb2dq\") pod \"csi-node-driver-xgjcs\" (UID: \"f2ae45c7-48f4-4c14-9998-b19005636b8c\") " pod="calico-system/csi-node-driver-xgjcs" Oct 9 07:14:40.161892 kubelet[2740]: E1009 07:14:40.161783 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.161892 kubelet[2740]: W1009 07:14:40.161795 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.161892 kubelet[2740]: E1009 07:14:40.161808 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.161892 kubelet[2740]: I1009 07:14:40.161827 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae45c7-48f4-4c14-9998-b19005636b8c-kubelet-dir\") pod \"csi-node-driver-xgjcs\" (UID: \"f2ae45c7-48f4-4c14-9998-b19005636b8c\") " pod="calico-system/csi-node-driver-xgjcs" Oct 9 07:14:40.162251 kubelet[2740]: E1009 07:14:40.162154 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.162251 kubelet[2740]: W1009 07:14:40.162165 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.162251 kubelet[2740]: E1009 07:14:40.162182 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.162251 kubelet[2740]: I1009 07:14:40.162200 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae45c7-48f4-4c14-9998-b19005636b8c-socket-dir\") pod \"csi-node-driver-xgjcs\" (UID: \"f2ae45c7-48f4-4c14-9998-b19005636b8c\") " pod="calico-system/csi-node-driver-xgjcs" Oct 9 07:14:40.162678 kubelet[2740]: E1009 07:14:40.162563 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.162678 kubelet[2740]: W1009 07:14:40.162574 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.162678 kubelet[2740]: E1009 07:14:40.162596 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.162678 kubelet[2740]: I1009 07:14:40.162615 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2ae45c7-48f4-4c14-9998-b19005636b8c-varrun\") pod \"csi-node-driver-xgjcs\" (UID: \"f2ae45c7-48f4-4c14-9998-b19005636b8c\") " pod="calico-system/csi-node-driver-xgjcs" Oct 9 07:14:40.163131 kubelet[2740]: E1009 07:14:40.162994 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.163131 kubelet[2740]: W1009 07:14:40.163006 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.163131 kubelet[2740]: E1009 07:14:40.163063 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.163131 kubelet[2740]: I1009 07:14:40.163098 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae45c7-48f4-4c14-9998-b19005636b8c-registration-dir\") pod \"csi-node-driver-xgjcs\" (UID: \"f2ae45c7-48f4-4c14-9998-b19005636b8c\") " pod="calico-system/csi-node-driver-xgjcs" Oct 9 07:14:40.163467 kubelet[2740]: E1009 07:14:40.163396 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.163467 kubelet[2740]: W1009 07:14:40.163405 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.163624 kubelet[2740]: E1009 07:14:40.163537 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.163720 kubelet[2740]: E1009 07:14:40.163710 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.163816 kubelet[2740]: W1009 07:14:40.163765 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.163888 kubelet[2740]: E1009 07:14:40.163850 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.164131 kubelet[2740]: E1009 07:14:40.164059 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.164131 kubelet[2740]: W1009 07:14:40.164068 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.164311 kubelet[2740]: E1009 07:14:40.164210 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.164398 kubelet[2740]: E1009 07:14:40.164389 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.164500 kubelet[2740]: W1009 07:14:40.164447 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.164609 kubelet[2740]: E1009 07:14:40.164558 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.164818 kubelet[2740]: E1009 07:14:40.164745 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.164818 kubelet[2740]: W1009 07:14:40.164754 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.164909 kubelet[2740]: E1009 07:14:40.164891 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.165216 kubelet[2740]: E1009 07:14:40.165122 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.165216 kubelet[2740]: W1009 07:14:40.165132 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.165216 kubelet[2740]: E1009 07:14:40.165142 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.165552 kubelet[2740]: E1009 07:14:40.165452 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.165552 kubelet[2740]: W1009 07:14:40.165462 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.165552 kubelet[2740]: E1009 07:14:40.165472 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.166011 kubelet[2740]: E1009 07:14:40.165811 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.166011 kubelet[2740]: W1009 07:14:40.165820 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.166011 kubelet[2740]: E1009 07:14:40.165831 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.166236 kubelet[2740]: E1009 07:14:40.166160 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.166236 kubelet[2740]: W1009 07:14:40.166174 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.166236 kubelet[2740]: E1009 07:14:40.166184 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.166518 kubelet[2740]: E1009 07:14:40.166485 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.166518 kubelet[2740]: W1009 07:14:40.166494 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.166518 kubelet[2740]: E1009 07:14:40.166505 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.264078 kubelet[2740]: E1009 07:14:40.264039 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.264078 kubelet[2740]: W1009 07:14:40.264059 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.264078 kubelet[2740]: E1009 07:14:40.264078 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.264343 kubelet[2740]: E1009 07:14:40.264317 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.264343 kubelet[2740]: W1009 07:14:40.264334 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.264394 kubelet[2740]: E1009 07:14:40.264356 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.264618 kubelet[2740]: E1009 07:14:40.264601 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.264618 kubelet[2740]: W1009 07:14:40.264615 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.264680 kubelet[2740]: E1009 07:14:40.264630 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.264854 kubelet[2740]: E1009 07:14:40.264838 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.264854 kubelet[2740]: W1009 07:14:40.264851 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.264910 kubelet[2740]: E1009 07:14:40.264871 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.265179 kubelet[2740]: E1009 07:14:40.265161 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.265179 kubelet[2740]: W1009 07:14:40.265176 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.265256 kubelet[2740]: E1009 07:14:40.265195 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.265437 kubelet[2740]: E1009 07:14:40.265413 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.265437 kubelet[2740]: W1009 07:14:40.265428 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.265487 kubelet[2740]: E1009 07:14:40.265444 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.265638 kubelet[2740]: E1009 07:14:40.265624 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.265638 kubelet[2740]: W1009 07:14:40.265635 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.265693 kubelet[2740]: E1009 07:14:40.265665 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.265812 kubelet[2740]: E1009 07:14:40.265798 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.265812 kubelet[2740]: W1009 07:14:40.265808 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.265862 kubelet[2740]: E1009 07:14:40.265837 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.265999 kubelet[2740]: E1009 07:14:40.265985 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.265999 kubelet[2740]: W1009 07:14:40.265995 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.266065 kubelet[2740]: E1009 07:14:40.266023 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.266189 kubelet[2740]: E1009 07:14:40.266166 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.266189 kubelet[2740]: W1009 07:14:40.266176 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.266189 kubelet[2740]: E1009 07:14:40.266189 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.266421 kubelet[2740]: E1009 07:14:40.266404 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.266453 kubelet[2740]: W1009 07:14:40.266422 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.266505 kubelet[2740]: E1009 07:14:40.266465 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.266660 kubelet[2740]: E1009 07:14:40.266643 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.266660 kubelet[2740]: W1009 07:14:40.266655 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.266728 kubelet[2740]: E1009 07:14:40.266688 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.266846 kubelet[2740]: E1009 07:14:40.266831 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.266846 kubelet[2740]: W1009 07:14:40.266841 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.266952 kubelet[2740]: E1009 07:14:40.266862 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.267109 kubelet[2740]: E1009 07:14:40.267093 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.267109 kubelet[2740]: W1009 07:14:40.267104 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.267193 kubelet[2740]: E1009 07:14:40.267137 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.267308 kubelet[2740]: E1009 07:14:40.267291 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.267308 kubelet[2740]: W1009 07:14:40.267302 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.267394 kubelet[2740]: E1009 07:14:40.267335 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.267536 kubelet[2740]: E1009 07:14:40.267518 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.267536 kubelet[2740]: W1009 07:14:40.267531 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.267608 kubelet[2740]: E1009 07:14:40.267575 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.267821 kubelet[2740]: E1009 07:14:40.267806 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.267851 kubelet[2740]: W1009 07:14:40.267820 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.267988 kubelet[2740]: E1009 07:14:40.267902 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.268152 kubelet[2740]: E1009 07:14:40.268121 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.268152 kubelet[2740]: W1009 07:14:40.268135 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.268203 kubelet[2740]: E1009 07:14:40.268174 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.268391 kubelet[2740]: E1009 07:14:40.268370 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.268391 kubelet[2740]: W1009 07:14:40.268383 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.268469 kubelet[2740]: E1009 07:14:40.268405 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.268702 kubelet[2740]: E1009 07:14:40.268680 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.268702 kubelet[2740]: W1009 07:14:40.268693 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.268760 kubelet[2740]: E1009 07:14:40.268712 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.269023 kubelet[2740]: E1009 07:14:40.269006 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.269056 kubelet[2740]: W1009 07:14:40.269022 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.269056 kubelet[2740]: E1009 07:14:40.269044 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.269385 kubelet[2740]: E1009 07:14:40.269363 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.269385 kubelet[2740]: W1009 07:14:40.269388 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.269465 kubelet[2740]: E1009 07:14:40.269404 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.269656 kubelet[2740]: E1009 07:14:40.269639 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.269656 kubelet[2740]: W1009 07:14:40.269650 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.269755 kubelet[2740]: E1009 07:14:40.269668 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.269967 kubelet[2740]: E1009 07:14:40.269907 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.269967 kubelet[2740]: W1009 07:14:40.269961 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.270066 kubelet[2740]: E1009 07:14:40.269979 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:40.270213 kubelet[2740]: E1009 07:14:40.270197 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.270213 kubelet[2740]: W1009 07:14:40.270207 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.270213 kubelet[2740]: E1009 07:14:40.270217 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.363841 kubelet[2740]: E1009 07:14:40.362741 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:40.363841 kubelet[2740]: W1009 07:14:40.362764 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:40.363841 kubelet[2740]: E1009 07:14:40.362787 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:40.368226 containerd[1586]: time="2024-10-09T07:14:40.366475977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:14:40.368226 containerd[1586]: time="2024-10-09T07:14:40.366554034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:14:40.368226 containerd[1586]: time="2024-10-09T07:14:40.366589330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:14:40.368226 containerd[1586]: time="2024-10-09T07:14:40.366605530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:14:40.368465 kubelet[2740]: E1009 07:14:40.367449 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:40.372684 containerd[1586]: time="2024-10-09T07:14:40.372640008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kxxf,Uid:124fdd79-53a3-41c1-a1fc-b5eb7ae269fd,Namespace:calico-system,Attempt:0,}" Oct 9 07:14:40.418153 containerd[1586]: time="2024-10-09T07:14:40.417769139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:14:40.418153 containerd[1586]: time="2024-10-09T07:14:40.417835183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:14:40.418153 containerd[1586]: time="2024-10-09T07:14:40.417852286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:14:40.418153 containerd[1586]: time="2024-10-09T07:14:40.417863647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:14:40.443603 containerd[1586]: time="2024-10-09T07:14:40.443559842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d97dbc964-r7blq,Uid:fcdd51dc-8690-43f8-85f2-8364e9ff7980,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e51da95845ed412c010d832cc8ee35ef7d2f575768324d25d335db96d576742\"" Oct 9 07:14:40.444335 kubelet[2740]: E1009 07:14:40.444306 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:40.445214 containerd[1586]: time="2024-10-09T07:14:40.445162834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:14:40.469697 containerd[1586]: time="2024-10-09T07:14:40.469656452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kxxf,Uid:124fdd79-53a3-41c1-a1fc-b5eb7ae269fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\"" Oct 9 07:14:40.471602 kubelet[2740]: E1009 07:14:40.471452 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:41.839514 kubelet[2740]: E1009 07:14:41.839472 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" Oct 9 07:14:42.828746 containerd[1586]: time="2024-10-09T07:14:42.828687306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:42.829542 containerd[1586]: 
time="2024-10-09T07:14:42.829497011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:14:42.830499 containerd[1586]: time="2024-10-09T07:14:42.830463422Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:42.833325 containerd[1586]: time="2024-10-09T07:14:42.833292613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:42.833670 containerd[1586]: time="2024-10-09T07:14:42.833648964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.388457786s" Oct 9 07:14:42.833712 containerd[1586]: time="2024-10-09T07:14:42.833676305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:14:42.839979 containerd[1586]: time="2024-10-09T07:14:42.839945937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:14:42.847189 containerd[1586]: time="2024-10-09T07:14:42.847155741Z" level=info msg="CreateContainer within sandbox \"1e51da95845ed412c010d832cc8ee35ef7d2f575768324d25d335db96d576742\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:14:42.860553 containerd[1586]: time="2024-10-09T07:14:42.860520080Z" level=info msg="CreateContainer within sandbox \"1e51da95845ed412c010d832cc8ee35ef7d2f575768324d25d335db96d576742\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"34bb1b25f24b6454aa3fe02edbe5b0d1f6493827e3214329459dc975a0c1b2d2\"" Oct 9 07:14:42.860957 containerd[1586]: time="2024-10-09T07:14:42.860927588Z" level=info msg="StartContainer for \"34bb1b25f24b6454aa3fe02edbe5b0d1f6493827e3214329459dc975a0c1b2d2\"" Oct 9 07:14:42.933013 containerd[1586]: time="2024-10-09T07:14:42.932130767Z" level=info msg="StartContainer for \"34bb1b25f24b6454aa3fe02edbe5b0d1f6493827e3214329459dc975a0c1b2d2\" returns successfully" Oct 9 07:14:43.841751 kubelet[2740]: E1009 07:14:43.841697 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" Oct 9 07:14:43.900056 kubelet[2740]: E1009 07:14:43.900024 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:43.990070 kubelet[2740]: E1009 07:14:43.990038 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.990070 kubelet[2740]: W1009 07:14:43.990071 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.990306 kubelet[2740]: E1009 07:14:43.990101 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.990402 kubelet[2740]: E1009 07:14:43.990391 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.990435 kubelet[2740]: W1009 07:14:43.990400 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.990435 kubelet[2740]: E1009 07:14:43.990412 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.990611 kubelet[2740]: E1009 07:14:43.990600 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.990611 kubelet[2740]: W1009 07:14:43.990609 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.990672 kubelet[2740]: E1009 07:14:43.990619 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.990809 kubelet[2740]: E1009 07:14:43.990798 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.990809 kubelet[2740]: W1009 07:14:43.990807 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.990861 kubelet[2740]: E1009 07:14:43.990817 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.991027 kubelet[2740]: E1009 07:14:43.991016 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991027 kubelet[2740]: W1009 07:14:43.991026 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.991103 kubelet[2740]: E1009 07:14:43.991035 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.991224 kubelet[2740]: E1009 07:14:43.991213 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991224 kubelet[2740]: W1009 07:14:43.991221 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.991293 kubelet[2740]: E1009 07:14:43.991231 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.991412 kubelet[2740]: E1009 07:14:43.991402 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991412 kubelet[2740]: W1009 07:14:43.991410 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.991471 kubelet[2740]: E1009 07:14:43.991419 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.991597 kubelet[2740]: E1009 07:14:43.991586 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991597 kubelet[2740]: W1009 07:14:43.991595 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.991646 kubelet[2740]: E1009 07:14:43.991604 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.991786 kubelet[2740]: E1009 07:14:43.991776 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991786 kubelet[2740]: W1009 07:14:43.991784 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.991852 kubelet[2740]: E1009 07:14:43.991794 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.991986 kubelet[2740]: E1009 07:14:43.991975 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.991986 kubelet[2740]: W1009 07:14:43.991984 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.992044 kubelet[2740]: E1009 07:14:43.991993 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.992243 kubelet[2740]: E1009 07:14:43.992233 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.992243 kubelet[2740]: W1009 07:14:43.992242 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.992307 kubelet[2740]: E1009 07:14:43.992251 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.992443 kubelet[2740]: E1009 07:14:43.992433 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.992443 kubelet[2740]: W1009 07:14:43.992442 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.992488 kubelet[2740]: E1009 07:14:43.992451 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.992628 kubelet[2740]: E1009 07:14:43.992618 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.992628 kubelet[2740]: W1009 07:14:43.992626 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.992682 kubelet[2740]: E1009 07:14:43.992636 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:43.992809 kubelet[2740]: E1009 07:14:43.992799 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.992809 kubelet[2740]: W1009 07:14:43.992807 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.992868 kubelet[2740]: E1009 07:14:43.992816 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:43.993007 kubelet[2740]: E1009 07:14:43.992997 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:43.993007 kubelet[2740]: W1009 07:14:43.993006 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:43.993072 kubelet[2740]: E1009 07:14:43.993015 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.089244 kubelet[2740]: E1009 07:14:44.089203 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.089244 kubelet[2740]: W1009 07:14:44.089225 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.089244 kubelet[2740]: E1009 07:14:44.089247 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.089577 kubelet[2740]: E1009 07:14:44.089551 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.089577 kubelet[2740]: W1009 07:14:44.089572 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.089644 kubelet[2740]: E1009 07:14:44.089603 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.089863 kubelet[2740]: E1009 07:14:44.089841 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.089863 kubelet[2740]: W1009 07:14:44.089856 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.089934 kubelet[2740]: E1009 07:14:44.089875 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.090109 kubelet[2740]: E1009 07:14:44.090095 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.090109 kubelet[2740]: W1009 07:14:44.090105 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.090180 kubelet[2740]: E1009 07:14:44.090125 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.090357 kubelet[2740]: E1009 07:14:44.090343 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.090357 kubelet[2740]: W1009 07:14:44.090354 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.090410 kubelet[2740]: E1009 07:14:44.090370 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.090608 kubelet[2740]: E1009 07:14:44.090587 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.090608 kubelet[2740]: W1009 07:14:44.090598 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.090667 kubelet[2740]: E1009 07:14:44.090612 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.090845 kubelet[2740]: E1009 07:14:44.090829 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.090845 kubelet[2740]: W1009 07:14:44.090842 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.090895 kubelet[2740]: E1009 07:14:44.090860 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.091113 kubelet[2740]: E1009 07:14:44.091096 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.091113 kubelet[2740]: W1009 07:14:44.091107 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.091203 kubelet[2740]: E1009 07:14:44.091152 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.091334 kubelet[2740]: E1009 07:14:44.091316 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.091334 kubelet[2740]: W1009 07:14:44.091327 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.091394 kubelet[2740]: E1009 07:14:44.091356 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.091531 kubelet[2740]: E1009 07:14:44.091513 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.091531 kubelet[2740]: W1009 07:14:44.091524 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.091590 kubelet[2740]: E1009 07:14:44.091542 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.091747 kubelet[2740]: E1009 07:14:44.091731 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.091747 kubelet[2740]: W1009 07:14:44.091741 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.091848 kubelet[2740]: E1009 07:14:44.091755 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.092014 kubelet[2740]: E1009 07:14:44.092001 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.092047 kubelet[2740]: W1009 07:14:44.092014 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.092047 kubelet[2740]: E1009 07:14:44.092036 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.092484 kubelet[2740]: E1009 07:14:44.092263 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.092484 kubelet[2740]: W1009 07:14:44.092273 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.092484 kubelet[2740]: E1009 07:14:44.092284 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.092602 kubelet[2740]: E1009 07:14:44.092515 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.092602 kubelet[2740]: W1009 07:14:44.092522 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.092602 kubelet[2740]: E1009 07:14:44.092538 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.093082 kubelet[2740]: E1009 07:14:44.093015 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.093082 kubelet[2740]: W1009 07:14:44.093025 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.093149 kubelet[2740]: E1009 07:14:44.093082 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.093393 kubelet[2740]: E1009 07:14:44.093378 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.093481 kubelet[2740]: W1009 07:14:44.093392 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.093481 kubelet[2740]: E1009 07:14:44.093446 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.093758 kubelet[2740]: E1009 07:14:44.093704 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.093758 kubelet[2740]: W1009 07:14:44.093718 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.093958 kubelet[2740]: E1009 07:14:44.093733 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:14:44.094179 kubelet[2740]: E1009 07:14:44.094158 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:14:44.094179 kubelet[2740]: W1009 07:14:44.094177 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:14:44.094240 kubelet[2740]: E1009 07:14:44.094189 2740 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:14:44.715410 containerd[1586]: time="2024-10-09T07:14:44.715362548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:44.716134 containerd[1586]: time="2024-10-09T07:14:44.716095939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:14:44.717317 containerd[1586]: time="2024-10-09T07:14:44.717289637Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:44.719790 containerd[1586]: time="2024-10-09T07:14:44.719724802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:44.720396 containerd[1586]: time="2024-10-09T07:14:44.720343858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.880367203s" Oct 9 07:14:44.720456 containerd[1586]: time="2024-10-09T07:14:44.720389082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:14:44.721799 containerd[1586]: time="2024-10-09T07:14:44.721778819Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:14:44.775523 containerd[1586]: time="2024-10-09T07:14:44.775466421Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17\"" Oct 9 07:14:44.776275 containerd[1586]: time="2024-10-09T07:14:44.776191506Z" level=info msg="StartContainer for \"a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17\"" Oct 9 07:14:44.838007 containerd[1586]: time="2024-10-09T07:14:44.837954403Z" level=info msg="StartContainer for \"a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17\" returns successfully" Oct 9 07:14:44.876304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17-rootfs.mount: Deactivated successfully. Oct 9 07:14:44.904610 kubelet[2740]: E1009 07:14:44.904548 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:45.003348 kubelet[2740]: I1009 07:14:45.003213 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:14:45.004065 kubelet[2740]: E1009 07:14:45.004048 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:45.007802 containerd[1586]: time="2024-10-09T07:14:45.007718335Z" level=info msg="shim disconnected" id=a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17 namespace=k8s.io Oct 9 07:14:45.007802 containerd[1586]: time="2024-10-09T07:14:45.007783990Z" level=warning msg="cleaning up after shim disconnected" id=a8c61c244646ad124f81db7093c25f4b4190ec650f9b92b3f719204f3fb93b17 namespace=k8s.io 
Oct 9 07:14:45.007802 containerd[1586]: time="2024-10-09T07:14:45.007793608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:14:45.039528 kubelet[2740]: I1009 07:14:45.039484 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7d97dbc964-r7blq" podStartSLOduration=3.646136531 podStartE2EDuration="6.035181744s" podCreationTimestamp="2024-10-09 07:14:39 +0000 UTC" firstStartedPulling="2024-10-09 07:14:40.44498037 +0000 UTC m=+20.700314785" lastFinishedPulling="2024-10-09 07:14:42.834025583 +0000 UTC m=+23.089359998" observedRunningTime="2024-10-09 07:14:43.911083664 +0000 UTC m=+24.166418079" watchObservedRunningTime="2024-10-09 07:14:45.035181744 +0000 UTC m=+25.290516159" Oct 9 07:14:45.838616 kubelet[2740]: E1009 07:14:45.838565 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" Oct 9 07:14:45.906397 kubelet[2740]: E1009 07:14:45.906355 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:45.907672 containerd[1586]: time="2024-10-09T07:14:45.907393643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:14:47.839205 kubelet[2740]: E1009 07:14:47.839162 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c" Oct 9 07:14:49.019148 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:41538.service - OpenSSH 
per-connection server daemon (10.0.0.1:41538). Oct 9 07:14:49.140980 kubelet[2740]: I1009 07:14:49.140911 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:14:49.141658 kubelet[2740]: E1009 07:14:49.141628 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:49.358868 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 41538 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:14:49.363905 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:14:49.373617 systemd-logind[1569]: New session 8 of user core. Oct 9 07:14:49.381713 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:14:49.527225 sshd[3451]: pam_unix(sshd:session): session closed for user core Oct 9 07:14:49.533164 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:14:49.534152 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:41538.service: Deactivated successfully. Oct 9 07:14:49.537202 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:14:49.538690 systemd-logind[1569]: Removed session 8. 
Oct 9 07:14:49.839407 kubelet[2740]: E1009 07:14:49.839280 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c"
Oct 9 07:14:49.913331 kubelet[2740]: E1009 07:14:49.913120 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:50.510581 containerd[1586]: time="2024-10-09T07:14:50.510529293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:50.511344 containerd[1586]: time="2024-10-09T07:14:50.511291175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 9 07:14:50.512432 containerd[1586]: time="2024-10-09T07:14:50.512389871Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:50.514678 containerd[1586]: time="2024-10-09T07:14:50.514638620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:14:50.515364 containerd[1586]: time="2024-10-09T07:14:50.515331964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.607894168s"
Oct 9 07:14:50.515429 containerd[1586]: time="2024-10-09T07:14:50.515373111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 9 07:14:50.517067 containerd[1586]: time="2024-10-09T07:14:50.517009608Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 9 07:14:50.531061 containerd[1586]: time="2024-10-09T07:14:50.530975519Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76\""
Oct 9 07:14:50.531541 containerd[1586]: time="2024-10-09T07:14:50.531492171Z" level=info msg="StartContainer for \"36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76\""
Oct 9 07:14:50.946480 containerd[1586]: time="2024-10-09T07:14:50.946424959Z" level=info msg="StartContainer for \"36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76\" returns successfully"
Oct 9 07:14:51.839598 kubelet[2740]: E1009 07:14:51.839551 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c"
Oct 9 07:14:51.950365 kubelet[2740]: E1009 07:14:51.950333 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:51.996471 containerd[1586]: time="2024-10-09T07:14:51.996387270Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 07:14:52.023605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76-rootfs.mount: Deactivated successfully.
Oct 9 07:14:52.026799 containerd[1586]: time="2024-10-09T07:14:52.026734345Z" level=info msg="shim disconnected" id=36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76 namespace=k8s.io
Oct 9 07:14:52.026799 containerd[1586]: time="2024-10-09T07:14:52.026793036Z" level=warning msg="cleaning up after shim disconnected" id=36c27726847254811648ddbe97f569eba68d1c67176a3763ad2b1c5ae7c8df76 namespace=k8s.io
Oct 9 07:14:52.028177 containerd[1586]: time="2024-10-09T07:14:52.026806181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:14:52.028219 kubelet[2740]: I1009 07:14:52.027098 2740 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 9 07:14:52.050209 kubelet[2740]: I1009 07:14:52.050106 2740 topology_manager.go:215] "Topology Admit Handler" podUID="781aa2fc-41fc-40b3-9700-ecfd097a2855" podNamespace="kube-system" podName="coredns-76f75df574-t6jwx"
Oct 9 07:14:52.053411 kubelet[2740]: I1009 07:14:52.053018 2740 topology_manager.go:215] "Topology Admit Handler" podUID="6da8c6fb-d852-4cac-a809-b18748e45975" podNamespace="kube-system" podName="coredns-76f75df574-ljpqg"
Oct 9 07:14:52.054228 kubelet[2740]: I1009 07:14:52.054101 2740 topology_manager.go:215] "Topology Admit Handler" podUID="04d277c1-6044-4d8c-9a67-d2697166170d" podNamespace="calico-system" podName="calico-kube-controllers-6bbc98dcd-jk426"
Oct 9 07:14:52.248237 kubelet[2740]: I1009 07:14:52.248156 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mfql\" (UniqueName: \"kubernetes.io/projected/781aa2fc-41fc-40b3-9700-ecfd097a2855-kube-api-access-4mfql\") pod \"coredns-76f75df574-t6jwx\" (UID: \"781aa2fc-41fc-40b3-9700-ecfd097a2855\") " pod="kube-system/coredns-76f75df574-t6jwx"
Oct 9 07:14:52.248237 kubelet[2740]: I1009 07:14:52.248242 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbphr\" (UniqueName: \"kubernetes.io/projected/04d277c1-6044-4d8c-9a67-d2697166170d-kube-api-access-pbphr\") pod \"calico-kube-controllers-6bbc98dcd-jk426\" (UID: \"04d277c1-6044-4d8c-9a67-d2697166170d\") " pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426"
Oct 9 07:14:52.248514 kubelet[2740]: I1009 07:14:52.248352 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9clv5\" (UniqueName: \"kubernetes.io/projected/6da8c6fb-d852-4cac-a809-b18748e45975-kube-api-access-9clv5\") pod \"coredns-76f75df574-ljpqg\" (UID: \"6da8c6fb-d852-4cac-a809-b18748e45975\") " pod="kube-system/coredns-76f75df574-ljpqg"
Oct 9 07:14:52.248514 kubelet[2740]: I1009 07:14:52.248393 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d277c1-6044-4d8c-9a67-d2697166170d-tigera-ca-bundle\") pod \"calico-kube-controllers-6bbc98dcd-jk426\" (UID: \"04d277c1-6044-4d8c-9a67-d2697166170d\") " pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426"
Oct 9 07:14:52.248514 kubelet[2740]: I1009 07:14:52.248419 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6da8c6fb-d852-4cac-a809-b18748e45975-config-volume\") pod \"coredns-76f75df574-ljpqg\" (UID: \"6da8c6fb-d852-4cac-a809-b18748e45975\") " pod="kube-system/coredns-76f75df574-ljpqg"
Oct 9 07:14:52.248514 kubelet[2740]: I1009 07:14:52.248507 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781aa2fc-41fc-40b3-9700-ecfd097a2855-config-volume\") pod \"coredns-76f75df574-t6jwx\" (UID: \"781aa2fc-41fc-40b3-9700-ecfd097a2855\") " pod="kube-system/coredns-76f75df574-t6jwx"
Oct 9 07:14:52.362533 kubelet[2740]: E1009 07:14:52.362497 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:52.364254 containerd[1586]: time="2024-10-09T07:14:52.364225251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc98dcd-jk426,Uid:04d277c1-6044-4d8c-9a67-d2697166170d,Namespace:calico-system,Attempt:0,}"
Oct 9 07:14:52.364660 containerd[1586]: time="2024-10-09T07:14:52.364225742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljpqg,Uid:6da8c6fb-d852-4cac-a809-b18748e45975,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:52.448895 containerd[1586]: time="2024-10-09T07:14:52.448837180Z" level=error msg="Failed to destroy network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.449382 containerd[1586]: time="2024-10-09T07:14:52.449349494Z" level=error msg="encountered an error cleaning up failed sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.449436 containerd[1586]: time="2024-10-09T07:14:52.449398295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljpqg,Uid:6da8c6fb-d852-4cac-a809-b18748e45975,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.449604 containerd[1586]: time="2024-10-09T07:14:52.449562714Z" level=error msg="Failed to destroy network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.449758 kubelet[2740]: E1009 07:14:52.449709 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.449823 kubelet[2740]: E1009 07:14:52.449777 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ljpqg"
Oct 9 07:14:52.449823 kubelet[2740]: E1009 07:14:52.449799 2740 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ljpqg"
Oct 9 07:14:52.449877 kubelet[2740]: E1009 07:14:52.449857 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ljpqg_kube-system(6da8c6fb-d852-4cac-a809-b18748e45975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ljpqg_kube-system(6da8c6fb-d852-4cac-a809-b18748e45975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ljpqg" podUID="6da8c6fb-d852-4cac-a809-b18748e45975"
Oct 9 07:14:52.450061 containerd[1586]: time="2024-10-09T07:14:52.450020184Z" level=error msg="encountered an error cleaning up failed sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.450117 containerd[1586]: time="2024-10-09T07:14:52.450089214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc98dcd-jk426,Uid:04d277c1-6044-4d8c-9a67-d2697166170d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.450370 kubelet[2740]: E1009 07:14:52.450348 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.450420 kubelet[2740]: E1009 07:14:52.450407 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426"
Oct 9 07:14:52.450449 kubelet[2740]: E1009 07:14:52.450435 2740 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426"
Oct 9 07:14:52.450525 kubelet[2740]: E1009 07:14:52.450506 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bbc98dcd-jk426_calico-system(04d277c1-6044-4d8c-9a67-d2697166170d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bbc98dcd-jk426_calico-system(04d277c1-6044-4d8c-9a67-d2697166170d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426" podUID="04d277c1-6044-4d8c-9a67-d2697166170d"
Oct 9 07:14:52.657311 kubelet[2740]: E1009 07:14:52.657260 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:52.658062 containerd[1586]: time="2024-10-09T07:14:52.657819162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t6jwx,Uid:781aa2fc-41fc-40b3-9700-ecfd097a2855,Namespace:kube-system,Attempt:0,}"
Oct 9 07:14:52.876663 containerd[1586]: time="2024-10-09T07:14:52.876593562Z" level=error msg="Failed to destroy network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.877053 containerd[1586]: time="2024-10-09T07:14:52.877030694Z" level=error msg="encountered an error cleaning up failed sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.877120 containerd[1586]: time="2024-10-09T07:14:52.877079045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t6jwx,Uid:781aa2fc-41fc-40b3-9700-ecfd097a2855,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.877372 kubelet[2740]: E1009 07:14:52.877347 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.877804 kubelet[2740]: E1009 07:14:52.877414 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t6jwx"
Oct 9 07:14:52.877804 kubelet[2740]: E1009 07:14:52.877439 2740 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t6jwx"
Oct 9 07:14:52.877804 kubelet[2740]: E1009 07:14:52.877507 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-t6jwx_kube-system(781aa2fc-41fc-40b3-9700-ecfd097a2855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-t6jwx_kube-system(781aa2fc-41fc-40b3-9700-ecfd097a2855)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t6jwx" podUID="781aa2fc-41fc-40b3-9700-ecfd097a2855"
Oct 9 07:14:52.953776 kubelet[2740]: E1009 07:14:52.953459 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:14:52.954633 containerd[1586]: time="2024-10-09T07:14:52.954592930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 9 07:14:52.955041 kubelet[2740]: I1009 07:14:52.955018 2740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e"
Oct 9 07:14:52.956739 kubelet[2740]: I1009 07:14:52.956716 2740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac"
Oct 9 07:14:52.957900 containerd[1586]: time="2024-10-09T07:14:52.957399757Z" level=info msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\""
Oct 9 07:14:52.957900 containerd[1586]: time="2024-10-09T07:14:52.957619931Z" level=info msg="Ensure that sandbox 5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac in task-service has been cleanup successfully"
Oct 9 07:14:52.958893 containerd[1586]: time="2024-10-09T07:14:52.958862246Z" level=info msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\""
Oct 9 07:14:52.959116 kubelet[2740]: I1009 07:14:52.959088 2740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911"
Oct 9 07:14:52.959346 containerd[1586]: time="2024-10-09T07:14:52.959317271Z" level=info msg="Ensure that sandbox e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e in task-service has been cleanup successfully"
Oct 9 07:14:52.959575 containerd[1586]: time="2024-10-09T07:14:52.959546773Z" level=info msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\""
Oct 9 07:14:52.959751 containerd[1586]: time="2024-10-09T07:14:52.959728324Z" level=info msg="Ensure that sandbox f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911 in task-service has been cleanup successfully"
Oct 9 07:14:52.988208 containerd[1586]: time="2024-10-09T07:14:52.988122378Z" level=error msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" failed" error="failed to destroy network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.988716 kubelet[2740]: E1009 07:14:52.988542 2740 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911"
Oct 9 07:14:52.988716 kubelet[2740]: E1009 07:14:52.988622 2740 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911"}
Oct 9 07:14:52.988716 kubelet[2740]: E1009 07:14:52.988658 2740 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04d277c1-6044-4d8c-9a67-d2697166170d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:14:52.988716 kubelet[2740]: E1009 07:14:52.988686 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04d277c1-6044-4d8c-9a67-d2697166170d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426" podUID="04d277c1-6044-4d8c-9a67-d2697166170d"
Oct 9 07:14:52.992420 containerd[1586]: time="2024-10-09T07:14:52.992378438Z" level=error msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" failed" error="failed to destroy network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.992687 kubelet[2740]: E1009 07:14:52.992561 2740 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e"
Oct 9 07:14:52.992687 kubelet[2740]: E1009 07:14:52.992592 2740 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e"}
Oct 9 07:14:52.992687 kubelet[2740]: E1009 07:14:52.992628 2740 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"781aa2fc-41fc-40b3-9700-ecfd097a2855\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:14:52.992687 kubelet[2740]: E1009 07:14:52.992652 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"781aa2fc-41fc-40b3-9700-ecfd097a2855\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t6jwx" podUID="781aa2fc-41fc-40b3-9700-ecfd097a2855"
Oct 9 07:14:52.993595 containerd[1586]: time="2024-10-09T07:14:52.993566602Z" level=error msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" failed" error="failed to destroy network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:52.993687 kubelet[2740]: E1009 07:14:52.993668 2740 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac"
Oct 9 07:14:52.993727 kubelet[2740]: E1009 07:14:52.993690 2740 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac"}
Oct 9 07:14:52.993727 kubelet[2740]: E1009 07:14:52.993716 2740 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6da8c6fb-d852-4cac-a809-b18748e45975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:14:52.993798 kubelet[2740]: E1009 07:14:52.993740 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6da8c6fb-d852-4cac-a809-b18748e45975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ljpqg" podUID="6da8c6fb-d852-4cac-a809-b18748e45975"
Oct 9 07:14:53.023568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac-shm.mount: Deactivated successfully.
Oct 9 07:14:53.023781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911-shm.mount: Deactivated successfully.
Oct 9 07:14:53.842928 containerd[1586]: time="2024-10-09T07:14:53.842852791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgjcs,Uid:f2ae45c7-48f4-4c14-9998-b19005636b8c,Namespace:calico-system,Attempt:0,}"
Oct 9 07:14:53.904261 containerd[1586]: time="2024-10-09T07:14:53.904185814Z" level=error msg="Failed to destroy network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:53.904751 containerd[1586]: time="2024-10-09T07:14:53.904723915Z" level=error msg="encountered an error cleaning up failed sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:53.904829 containerd[1586]: time="2024-10-09T07:14:53.904776173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgjcs,Uid:f2ae45c7-48f4-4c14-9998-b19005636b8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:53.905086 kubelet[2740]: E1009 07:14:53.905057 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:53.905472 kubelet[2740]: E1009 07:14:53.905122 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xgjcs"
Oct 9 07:14:53.905472 kubelet[2740]: E1009 07:14:53.905145 2740 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xgjcs"
Oct 9 07:14:53.905472 kubelet[2740]: E1009 07:14:53.905203 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xgjcs_calico-system(f2ae45c7-48f4-4c14-9998-b19005636b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xgjcs_calico-system(f2ae45c7-48f4-4c14-9998-b19005636b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c"
Oct 9 07:14:53.906982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f-shm.mount: Deactivated successfully.
Oct 9 07:14:53.962230 kubelet[2740]: I1009 07:14:53.962171 2740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f"
Oct 9 07:14:53.962791 containerd[1586]: time="2024-10-09T07:14:53.962746115Z" level=info msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\""
Oct 9 07:14:53.963038 containerd[1586]: time="2024-10-09T07:14:53.962990564Z" level=info msg="Ensure that sandbox 2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f in task-service has been cleanup successfully"
Oct 9 07:14:53.990763 containerd[1586]: time="2024-10-09T07:14:53.990695945Z" level=error msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" failed" error="failed to destroy network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:14:53.991053 kubelet[2740]: E1009 07:14:53.991006 2740 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f"
Oct 9 07:14:53.991114 kubelet[2740]: E1009 07:14:53.991061 2740 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f"}
Oct 9 07:14:53.991114 kubelet[2740]: E1009 07:14:53.991094 2740 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2ae45c7-48f4-4c14-9998-b19005636b8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:14:53.991192 kubelet[2740]: E1009 07:14:53.991123 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2ae45c7-48f4-4c14-9998-b19005636b8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xgjcs" podUID="f2ae45c7-48f4-4c14-9998-b19005636b8c"
Oct 9 07:14:54.537307 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:41540.service - OpenSSH per-connection server daemon (10.0.0.1:41540).
Oct 9 07:14:54.571516 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 41540 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:14:54.574038 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:14:54.579161 systemd-logind[1569]: New session 9 of user core. Oct 9 07:14:54.589209 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:14:54.714390 sshd[3786]: pam_unix(sshd:session): session closed for user core Oct 9 07:14:54.719120 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:41540.service: Deactivated successfully. Oct 9 07:14:54.722404 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:14:54.723412 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:14:54.724352 systemd-logind[1569]: Removed session 9. Oct 9 07:14:58.126196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692954271.mount: Deactivated successfully. Oct 9 07:14:59.272736 containerd[1586]: time="2024-10-09T07:14:59.272681971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:59.273516 containerd[1586]: time="2024-10-09T07:14:59.273479067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:14:59.275735 containerd[1586]: time="2024-10-09T07:14:59.275703325Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:59.278567 containerd[1586]: time="2024-10-09T07:14:59.278537357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:14:59.279401 containerd[1586]: time="2024-10-09T07:14:59.279355203Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.324707629s" Oct 9 07:14:59.279460 containerd[1586]: time="2024-10-09T07:14:59.279410707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:14:59.300573 containerd[1586]: time="2024-10-09T07:14:59.300536983Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:14:59.330290 containerd[1586]: time="2024-10-09T07:14:59.330237311Z" level=info msg="CreateContainer within sandbox \"5314ed7b02174a7703e252b150327b3d609bd1662b0a4fcb08295794e3ef6483\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a1e3f9eb927b68e94d1fbad36b496f7d3680987628e21215aa2e800cbe530b6e\"" Oct 9 07:14:59.331009 containerd[1586]: time="2024-10-09T07:14:59.330982219Z" level=info msg="StartContainer for \"a1e3f9eb927b68e94d1fbad36b496f7d3680987628e21215aa2e800cbe530b6e\"" Oct 9 07:14:59.431735 containerd[1586]: time="2024-10-09T07:14:59.431693948Z" level=info msg="StartContainer for \"a1e3f9eb927b68e94d1fbad36b496f7d3680987628e21215aa2e800cbe530b6e\" returns successfully" Oct 9 07:14:59.505355 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:14:59.506011 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:14:59.725385 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:56362.service - OpenSSH per-connection server daemon (10.0.0.1:56362). 
Oct 9 07:14:59.763121 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 56362 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:14:59.765314 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:14:59.770476 systemd-logind[1569]: New session 10 of user core. Oct 9 07:14:59.779285 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:14:59.894333 sshd[3868]: pam_unix(sshd:session): session closed for user core Oct 9 07:14:59.899467 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:56362.service: Deactivated successfully. Oct 9 07:14:59.902349 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:14:59.902397 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:14:59.903635 systemd-logind[1569]: Removed session 10. Oct 9 07:14:59.976806 kubelet[2740]: E1009 07:14:59.976600 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:14:59.992026 kubelet[2740]: I1009 07:14:59.991986 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9kxxf" podStartSLOduration=2.184085159 podStartE2EDuration="20.991940231s" podCreationTimestamp="2024-10-09 07:14:39 +0000 UTC" firstStartedPulling="2024-10-09 07:14:40.471894451 +0000 UTC m=+20.727228866" lastFinishedPulling="2024-10-09 07:14:59.279749523 +0000 UTC m=+39.535083938" observedRunningTime="2024-10-09 07:14:59.991900025 +0000 UTC m=+40.247234440" watchObservedRunningTime="2024-10-09 07:14:59.991940231 +0000 UTC m=+40.247274646" Oct 9 07:15:00.978142 kubelet[2740]: E1009 07:15:00.978095 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:01.185943 kernel: bpftool[4060]: memfd_create() called without 
MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:15:01.425263 systemd-networkd[1259]: vxlan.calico: Link UP Oct 9 07:15:01.425272 systemd-networkd[1259]: vxlan.calico: Gained carrier Oct 9 07:15:03.126131 systemd-networkd[1259]: vxlan.calico: Gained IPv6LL Oct 9 07:15:03.839305 containerd[1586]: time="2024-10-09T07:15:03.839170200Z" level=info msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\"" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.915 [INFO][4151] k8s.go 608: Cleaning up netns ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.915 [INFO][4151] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" iface="eth0" netns="/var/run/netns/cni-d34df803-beef-8640-902c-da89cf6575c9" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.916 [INFO][4151] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" iface="eth0" netns="/var/run/netns/cni-d34df803-beef-8640-902c-da89cf6575c9" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.916 [INFO][4151] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" iface="eth0" netns="/var/run/netns/cni-d34df803-beef-8640-902c-da89cf6575c9" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.916 [INFO][4151] k8s.go 615: Releasing IP address(es) ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.916 [INFO][4151] utils.go 188: Calico CNI releasing IP address ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.967 [INFO][4158] ipam_plugin.go 417: Releasing address using handleID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.968 [INFO][4158] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.968 [INFO][4158] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.976 [WARNING][4158] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.976 [INFO][4158] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.978 [INFO][4158] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:03.983685 containerd[1586]: 2024-10-09 07:15:03.981 [INFO][4151] k8s.go 621: Teardown processing complete. ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:03.984433 containerd[1586]: time="2024-10-09T07:15:03.983904935Z" level=info msg="TearDown network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" successfully" Oct 9 07:15:03.984433 containerd[1586]: time="2024-10-09T07:15:03.983963064Z" level=info msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" returns successfully" Oct 9 07:15:03.987024 systemd[1]: run-netns-cni\x2dd34df803\x2dbeef\x2d8640\x2d902c\x2dda89cf6575c9.mount: Deactivated successfully. 
Oct 9 07:15:03.990883 containerd[1586]: time="2024-10-09T07:15:03.990834063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc98dcd-jk426,Uid:04d277c1-6044-4d8c-9a67-d2697166170d,Namespace:calico-system,Attempt:1,}" Oct 9 07:15:04.328405 systemd-networkd[1259]: cali4c936db9554: Link UP Oct 9 07:15:04.329190 systemd-networkd[1259]: cali4c936db9554: Gained carrier Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.264 [INFO][4166] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0 calico-kube-controllers-6bbc98dcd- calico-system 04d277c1-6044-4d8c-9a67-d2697166170d 785 0 2024-10-09 07:14:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bbc98dcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bbc98dcd-jk426 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c936db9554 [] []}} ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.264 [INFO][4166] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.294 [INFO][4179] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" HandleID="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.301 [INFO][4179] ipam_plugin.go 270: Auto assigning IP ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" HandleID="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000580380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bbc98dcd-jk426", "timestamp":"2024-10-09 07:15:04.294827657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.301 [INFO][4179] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.302 [INFO][4179] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.302 [INFO][4179] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.303 [INFO][4179] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.307 [INFO][4179] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.311 [INFO][4179] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.312 [INFO][4179] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.314 [INFO][4179] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.314 [INFO][4179] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.315 [INFO][4179] ipam.go 1685: Creating new handle: k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74 Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.318 [INFO][4179] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.322 [INFO][4179] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" host="localhost" Oct 9 
07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.323 [INFO][4179] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" host="localhost" Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.323 [INFO][4179] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:04.342108 containerd[1586]: 2024-10-09 07:15:04.323 [INFO][4179] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" HandleID="k8s-pod-network.e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.326 [INFO][4166] k8s.go 386: Populated endpoint ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0", GenerateName:"calico-kube-controllers-6bbc98dcd-", Namespace:"calico-system", SelfLink:"", UID:"04d277c1-6044-4d8c-9a67-d2697166170d", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc98dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bbc98dcd-jk426", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c936db9554", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.326 [INFO][4166] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.326 [INFO][4166] dataplane_linux.go 68: Setting the host side veth name to cali4c936db9554 ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.329 [INFO][4166] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.329 [INFO][4166] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" 
Pod="calico-kube-controllers-6bbc98dcd-jk426" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0", GenerateName:"calico-kube-controllers-6bbc98dcd-", Namespace:"calico-system", SelfLink:"", UID:"04d277c1-6044-4d8c-9a67-d2697166170d", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc98dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74", Pod:"calico-kube-controllers-6bbc98dcd-jk426", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c936db9554", MAC:"5a:73:bf:94:41:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:04.342769 containerd[1586]: 2024-10-09 07:15:04.337 [INFO][4166] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74" Namespace="calico-system" Pod="calico-kube-controllers-6bbc98dcd-jk426" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:04.381485 containerd[1586]: time="2024-10-09T07:15:04.381324458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:15:04.381485 containerd[1586]: time="2024-10-09T07:15:04.381374161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:04.381485 containerd[1586]: time="2024-10-09T07:15:04.381387556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:15:04.381485 containerd[1586]: time="2024-10-09T07:15:04.381396673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:04.405543 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:15:04.432763 containerd[1586]: time="2024-10-09T07:15:04.432723854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc98dcd-jk426,Uid:04d277c1-6044-4d8c-9a67-d2697166170d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74\"" Oct 9 07:15:04.434873 containerd[1586]: time="2024-10-09T07:15:04.434815541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:15:04.839875 containerd[1586]: time="2024-10-09T07:15:04.839783691Z" level=info msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\"" Oct 9 07:15:04.910369 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:56376.service - OpenSSH per-connection server daemon (10.0.0.1:56376). 
Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.888 [INFO][4258] k8s.go 608: Cleaning up netns ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.888 [INFO][4258] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" iface="eth0" netns="/var/run/netns/cni-ffba146c-21ee-ed93-b60b-8d9011525b67" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.888 [INFO][4258] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" iface="eth0" netns="/var/run/netns/cni-ffba146c-21ee-ed93-b60b-8d9011525b67" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.889 [INFO][4258] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" iface="eth0" netns="/var/run/netns/cni-ffba146c-21ee-ed93-b60b-8d9011525b67" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.889 [INFO][4258] k8s.go 615: Releasing IP address(es) ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.889 [INFO][4258] utils.go 188: Calico CNI releasing IP address ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.908 [INFO][4265] ipam_plugin.go 417: Releasing address using handleID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.909 [INFO][4265] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.909 [INFO][4265] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.914 [WARNING][4265] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.914 [INFO][4265] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.915 [INFO][4265] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:04.920824 containerd[1586]: 2024-10-09 07:15:04.918 [INFO][4258] k8s.go 621: Teardown processing complete. 
ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:04.921360 containerd[1586]: time="2024-10-09T07:15:04.921090910Z" level=info msg="TearDown network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" successfully" Oct 9 07:15:04.921360 containerd[1586]: time="2024-10-09T07:15:04.921119563Z" level=info msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" returns successfully" Oct 9 07:15:04.921487 kubelet[2740]: E1009 07:15:04.921460 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:04.922266 containerd[1586]: time="2024-10-09T07:15:04.922223145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t6jwx,Uid:781aa2fc-41fc-40b3-9700-ecfd097a2855,Namespace:kube-system,Attempt:1,}" Oct 9 07:15:04.948267 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 56376 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:04.951141 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:04.960286 systemd-logind[1569]: New session 11 of user core. Oct 9 07:15:04.963537 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:15:04.988618 systemd[1]: run-netns-cni\x2dffba146c\x2d21ee\x2ded93\x2db60b\x2d8d9011525b67.mount: Deactivated successfully. 
Oct 9 07:15:05.039315 systemd-networkd[1259]: calie346df2c1c9: Link UP Oct 9 07:15:05.039626 systemd-networkd[1259]: calie346df2c1c9: Gained carrier Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:04.969 [INFO][4275] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--t6jwx-eth0 coredns-76f75df574- kube-system 781aa2fc-41fc-40b3-9700-ecfd097a2855 793 0 2024-10-09 07:14:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-t6jwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie346df2c1c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:04.970 [INFO][4275] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.001 [INFO][4291] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" HandleID="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.009 [INFO][4291] ipam_plugin.go 270: Auto assigning IP ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" 
HandleID="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035de30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-t6jwx", "timestamp":"2024-10-09 07:15:05.00144523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.009 [INFO][4291] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.009 [INFO][4291] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.009 [INFO][4291] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.010 [INFO][4291] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.014 [INFO][4291] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.018 [INFO][4291] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.020 [INFO][4291] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.022 [INFO][4291] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.022 [INFO][4291] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.024 [INFO][4291] ipam.go 1685: Creating new handle: k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.027 [INFO][4291] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.033 [INFO][4291] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.033 [INFO][4291] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" host="localhost" Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.033 [INFO][4291] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:15:05.054674 containerd[1586]: 2024-10-09 07:15:05.033 [INFO][4291] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" HandleID="k8s-pod-network.f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.036 [INFO][4275] k8s.go 386: Populated endpoint ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t6jwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"781aa2fc-41fc-40b3-9700-ecfd097a2855", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-t6jwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie346df2c1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.036 [INFO][4275] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.036 [INFO][4275] dataplane_linux.go 68: Setting the host side veth name to calie346df2c1c9 ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.039 [INFO][4275] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.039 [INFO][4275] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t6jwx-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"781aa2fc-41fc-40b3-9700-ecfd097a2855", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce", Pod:"coredns-76f75df574-t6jwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie346df2c1c9", MAC:"ae:9a:59:1c:97:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:05.055231 containerd[1586]: 2024-10-09 07:15:05.051 [INFO][4275] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce" Namespace="kube-system" Pod="coredns-76f75df574-t6jwx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:05.079977 containerd[1586]: 
time="2024-10-09T07:15:05.079854438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:15:05.079977 containerd[1586]: time="2024-10-09T07:15:05.079908559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:05.079977 containerd[1586]: time="2024-10-09T07:15:05.079939778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:15:05.079977 containerd[1586]: time="2024-10-09T07:15:05.079949215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:05.101961 sshd[4271]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:05.105626 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382). Oct 9 07:15:05.110362 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:56376.service: Deactivated successfully. Oct 9 07:15:05.112013 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:15:05.113632 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:15:05.113996 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:15:05.116560 systemd-logind[1569]: Removed session 11. 
Oct 9 07:15:05.145173 containerd[1586]: time="2024-10-09T07:15:05.145132378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t6jwx,Uid:781aa2fc-41fc-40b3-9700-ecfd097a2855,Namespace:kube-system,Attempt:1,} returns sandbox id \"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce\"" Oct 9 07:15:05.145831 kubelet[2740]: E1009 07:15:05.145804 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:05.148197 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:05.148780 containerd[1586]: time="2024-10-09T07:15:05.148745209Z" level=info msg="CreateContainer within sandbox \"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:15:05.150123 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:05.156392 systemd-logind[1569]: New session 12 of user core. Oct 9 07:15:05.163312 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 9 07:15:05.178140 containerd[1586]: time="2024-10-09T07:15:05.178086575Z" level=info msg="CreateContainer within sandbox \"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"034df67cb470ebadebd1fb5516ae9d5ee13a66a639a063d3b71cd97a18870d07\"" Oct 9 07:15:05.178763 containerd[1586]: time="2024-10-09T07:15:05.178715324Z" level=info msg="StartContainer for \"034df67cb470ebadebd1fb5516ae9d5ee13a66a639a063d3b71cd97a18870d07\"" Oct 9 07:15:05.241970 containerd[1586]: time="2024-10-09T07:15:05.241925984Z" level=info msg="StartContainer for \"034df67cb470ebadebd1fb5516ae9d5ee13a66a639a063d3b71cd97a18870d07\" returns successfully" Oct 9 07:15:05.313594 sshd[4355]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:05.320200 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:56396.service - OpenSSH per-connection server daemon (10.0.0.1:56396). Oct 9 07:15:05.320730 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:56382.service: Deactivated successfully. Oct 9 07:15:05.326580 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:15:05.328335 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:15:05.330401 systemd-logind[1569]: Removed session 12. Oct 9 07:15:05.363377 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 56396 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:05.365180 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:05.369781 systemd-logind[1569]: New session 13 of user core. Oct 9 07:15:05.375389 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:15:05.490225 sshd[4411]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:05.494773 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:56396.service: Deactivated successfully. Oct 9 07:15:05.497437 systemd[1]: session-13.scope: Deactivated successfully. 
Oct 9 07:15:05.497539 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:15:05.498689 systemd-logind[1569]: Removed session 13. Oct 9 07:15:05.839946 containerd[1586]: time="2024-10-09T07:15:05.839476105Z" level=info msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\"" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.890 [INFO][4449] k8s.go 608: Cleaning up netns ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.891 [INFO][4449] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" iface="eth0" netns="/var/run/netns/cni-57ada94d-b82f-b1e8-23d4-5372a2f2744b" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.891 [INFO][4449] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" iface="eth0" netns="/var/run/netns/cni-57ada94d-b82f-b1e8-23d4-5372a2f2744b" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.891 [INFO][4449] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" iface="eth0" netns="/var/run/netns/cni-57ada94d-b82f-b1e8-23d4-5372a2f2744b" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.891 [INFO][4449] k8s.go 615: Releasing IP address(es) ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.891 [INFO][4449] utils.go 188: Calico CNI releasing IP address ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.913 [INFO][4456] ipam_plugin.go 417: Releasing address using handleID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.913 [INFO][4456] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.913 [INFO][4456] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.921 [WARNING][4456] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.921 [INFO][4456] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.923 [INFO][4456] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:05.928552 containerd[1586]: 2024-10-09 07:15:05.925 [INFO][4449] k8s.go 621: Teardown processing complete. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:05.929085 containerd[1586]: time="2024-10-09T07:15:05.928822843Z" level=info msg="TearDown network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" successfully" Oct 9 07:15:05.929085 containerd[1586]: time="2024-10-09T07:15:05.928857618Z" level=info msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" returns successfully" Oct 9 07:15:05.929545 containerd[1586]: time="2024-10-09T07:15:05.929506697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgjcs,Uid:f2ae45c7-48f4-4c14-9998-b19005636b8c,Namespace:calico-system,Attempt:1,}" Oct 9 07:15:05.990658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558913078.mount: Deactivated successfully. Oct 9 07:15:05.991232 systemd[1]: run-netns-cni\x2d57ada94d\x2db82f\x2db1e8\x2d23d4\x2d5372a2f2744b.mount: Deactivated successfully. 
Oct 9 07:15:05.995795 kubelet[2740]: E1009 07:15:05.995590 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:06.007085 kubelet[2740]: I1009 07:15:06.006972 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t6jwx" podStartSLOduration=32.006937578 podStartE2EDuration="32.006937578s" podCreationTimestamp="2024-10-09 07:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:15:06.006447548 +0000 UTC m=+46.261781963" watchObservedRunningTime="2024-10-09 07:15:06.006937578 +0000 UTC m=+46.262271993" Oct 9 07:15:06.058908 systemd-networkd[1259]: cali94062497b53: Link UP Oct 9 07:15:06.059866 systemd-networkd[1259]: cali94062497b53: Gained carrier Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:05.971 [INFO][4465] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xgjcs-eth0 csi-node-driver- calico-system f2ae45c7-48f4-4c14-9998-b19005636b8c 820 0 2024-10-09 07:14:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-xgjcs eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali94062497b53 [] []}} ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:05.971 [INFO][4465] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.003 [INFO][4478] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" HandleID="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.013 [INFO][4478] ipam_plugin.go 270: Auto assigning IP ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" HandleID="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xgjcs", "timestamp":"2024-10-09 07:15:06.003887563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.013 [INFO][4478] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.013 [INFO][4478] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.013 [INFO][4478] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.019 [INFO][4478] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.028 [INFO][4478] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.034 [INFO][4478] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.038 [INFO][4478] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.040 [INFO][4478] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.040 [INFO][4478] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.042 [INFO][4478] ipam.go 1685: Creating new handle: k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2 Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.046 [INFO][4478] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.053 [INFO][4478] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" host="localhost" Oct 9 
07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.053 [INFO][4478] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" host="localhost" Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.053 [INFO][4478] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:06.070470 containerd[1586]: 2024-10-09 07:15:06.053 [INFO][4478] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" HandleID="k8s-pod-network.a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.056 [INFO][4465] k8s.go 386: Populated endpoint ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xgjcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2ae45c7-48f4-4c14-9998-b19005636b8c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xgjcs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali94062497b53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.056 [INFO][4465] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.056 [INFO][4465] dataplane_linux.go 68: Setting the host side veth name to cali94062497b53 ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.058 [INFO][4465] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.058 [INFO][4465] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xgjcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2ae45c7-48f4-4c14-9998-b19005636b8c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2", Pod:"csi-node-driver-xgjcs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali94062497b53", MAC:"2e:b6:0a:ee:99:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:06.071331 containerd[1586]: 2024-10-09 07:15:06.067 [INFO][4465] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2" Namespace="calico-system" Pod="csi-node-driver-xgjcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:06.155727 containerd[1586]: time="2024-10-09T07:15:06.155484198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:15:06.155727 containerd[1586]: time="2024-10-09T07:15:06.155550042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:06.155727 containerd[1586]: time="2024-10-09T07:15:06.155567685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:15:06.155727 containerd[1586]: time="2024-10-09T07:15:06.155582853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:06.191454 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:15:06.205273 containerd[1586]: time="2024-10-09T07:15:06.205225860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgjcs,Uid:f2ae45c7-48f4-4c14-9998-b19005636b8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2\"" Oct 9 07:15:06.264256 systemd-networkd[1259]: cali4c936db9554: Gained IPv6LL Oct 9 07:15:06.582063 systemd-networkd[1259]: calie346df2c1c9: Gained IPv6LL Oct 9 07:15:06.998536 kubelet[2740]: E1009 07:15:06.998490 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:07.100668 containerd[1586]: time="2024-10-09T07:15:07.100607638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:07.101456 containerd[1586]: time="2024-10-09T07:15:07.101422327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:15:07.108106 
containerd[1586]: time="2024-10-09T07:15:07.108073040Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:07.110205 containerd[1586]: time="2024-10-09T07:15:07.110178491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:07.117022 containerd[1586]: time="2024-10-09T07:15:07.116961020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.682104513s" Oct 9 07:15:07.117022 containerd[1586]: time="2024-10-09T07:15:07.117013018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:15:07.117709 containerd[1586]: time="2024-10-09T07:15:07.117669830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:15:07.126035 containerd[1586]: time="2024-10-09T07:15:07.125992360Z" level=info msg="CreateContainer within sandbox \"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:15:07.141218 containerd[1586]: time="2024-10-09T07:15:07.141171219Z" level=info msg="CreateContainer within sandbox \"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"7d67401d782f60c180d70a05de5fde8df5c9252087a8ac68c5c71e685014e83c\"" Oct 9 07:15:07.141879 containerd[1586]: time="2024-10-09T07:15:07.141836487Z" level=info msg="StartContainer for \"7d67401d782f60c180d70a05de5fde8df5c9252087a8ac68c5c71e685014e83c\"" Oct 9 07:15:07.211225 containerd[1586]: time="2024-10-09T07:15:07.211179670Z" level=info msg="StartContainer for \"7d67401d782f60c180d70a05de5fde8df5c9252087a8ac68c5c71e685014e83c\" returns successfully" Oct 9 07:15:07.287269 systemd-networkd[1259]: cali94062497b53: Gained IPv6LL Oct 9 07:15:07.839791 containerd[1586]: time="2024-10-09T07:15:07.839675783Z" level=info msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\"" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.896 [INFO][4610] k8s.go 608: Cleaning up netns ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.897 [INFO][4610] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" iface="eth0" netns="/var/run/netns/cni-c27e4b21-6d33-c579-303c-0550cdc373e5" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.897 [INFO][4610] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" iface="eth0" netns="/var/run/netns/cni-c27e4b21-6d33-c579-303c-0550cdc373e5" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.897 [INFO][4610] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" iface="eth0" netns="/var/run/netns/cni-c27e4b21-6d33-c579-303c-0550cdc373e5" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.897 [INFO][4610] k8s.go 615: Releasing IP address(es) ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.897 [INFO][4610] utils.go 188: Calico CNI releasing IP address ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.920 [INFO][4618] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.920 [INFO][4618] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.920 [INFO][4618] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.925 [WARNING][4618] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.925 [INFO][4618] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.927 [INFO][4618] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:07.933123 containerd[1586]: 2024-10-09 07:15:07.930 [INFO][4610] k8s.go 621: Teardown processing complete. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:07.933685 containerd[1586]: time="2024-10-09T07:15:07.933268647Z" level=info msg="TearDown network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" successfully" Oct 9 07:15:07.933685 containerd[1586]: time="2024-10-09T07:15:07.933295127Z" level=info msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" returns successfully" Oct 9 07:15:07.933754 kubelet[2740]: E1009 07:15:07.933669 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:07.934092 containerd[1586]: time="2024-10-09T07:15:07.934047620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljpqg,Uid:6da8c6fb-d852-4cac-a809-b18748e45975,Namespace:kube-system,Attempt:1,}" Oct 9 07:15:08.002905 kubelet[2740]: E1009 07:15:08.002875 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:08.069829 systemd-networkd[1259]: cali72b628c8584: Link UP Oct 9 07:15:08.070076 systemd-networkd[1259]: cali72b628c8584: Gained carrier Oct 9 07:15:08.079860 kubelet[2740]: I1009 07:15:08.079037 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bbc98dcd-jk426" podStartSLOduration=25.395861824 podStartE2EDuration="28.07899044s" podCreationTimestamp="2024-10-09 07:14:40 +0000 UTC" firstStartedPulling="2024-10-09 07:15:04.434241032 +0000 UTC m=+44.689575447" lastFinishedPulling="2024-10-09 07:15:07.117369648 +0000 UTC m=+47.372704063" observedRunningTime="2024-10-09 07:15:08.013344684 +0000 UTC m=+48.268679099" watchObservedRunningTime="2024-10-09 07:15:08.07899044 +0000 UTC m=+48.334324855" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.007 [INFO][4626] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--ljpqg-eth0 coredns-76f75df574- kube-system 6da8c6fb-d852-4cac-a809-b18748e45975 850 0 2024-10-09 07:14:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-ljpqg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72b628c8584 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.007 [INFO][4626] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.034 [INFO][4639] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" HandleID="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.042 [INFO][4639] ipam_plugin.go 270: Auto assigning IP ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" HandleID="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308320), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-ljpqg", "timestamp":"2024-10-09 07:15:08.034771555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.042 [INFO][4639] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.042 [INFO][4639] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.043 [INFO][4639] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.044 [INFO][4639] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.048 [INFO][4639] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.051 [INFO][4639] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.053 [INFO][4639] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.054 [INFO][4639] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.054 [INFO][4639] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.056 [INFO][4639] ipam.go 1685: Creating new handle: k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6 Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.058 [INFO][4639] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.064 [INFO][4639] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" host="localhost" Oct 9 
07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.064 [INFO][4639] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" host="localhost" Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.064 [INFO][4639] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:08.083679 containerd[1586]: 2024-10-09 07:15:08.064 [INFO][4639] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" HandleID="k8s-pod-network.a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.067 [INFO][4626] k8s.go 386: Populated endpoint ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ljpqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6da8c6fb-d852-4cac-a809-b18748e45975", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-ljpqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b628c8584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.067 [INFO][4626] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.067 [INFO][4626] dataplane_linux.go 68: Setting the host side veth name to cali72b628c8584 ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.069 [INFO][4626] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.070 [INFO][4626] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ljpqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6da8c6fb-d852-4cac-a809-b18748e45975", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6", Pod:"coredns-76f75df574-ljpqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b628c8584", MAC:"3a:82:84:3a:f3:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:08.084238 containerd[1586]: 2024-10-09 07:15:08.079 [INFO][4626] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6" Namespace="kube-system" Pod="coredns-76f75df574-ljpqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:08.107629 containerd[1586]: time="2024-10-09T07:15:08.107249493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:15:08.107629 containerd[1586]: time="2024-10-09T07:15:08.107369849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:08.107629 containerd[1586]: time="2024-10-09T07:15:08.107392292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:15:08.107629 containerd[1586]: time="2024-10-09T07:15:08.107406608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:08.126039 systemd[1]: run-netns-cni\x2dc27e4b21\x2d6d33\x2dc579\x2d303c\x2d0550cdc373e5.mount: Deactivated successfully. 
Oct 9 07:15:08.135774 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:15:08.163952 containerd[1586]: time="2024-10-09T07:15:08.163887785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljpqg,Uid:6da8c6fb-d852-4cac-a809-b18748e45975,Namespace:kube-system,Attempt:1,} returns sandbox id \"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6\"" Oct 9 07:15:08.164771 kubelet[2740]: E1009 07:15:08.164676 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:08.167392 containerd[1586]: time="2024-10-09T07:15:08.167343912Z" level=info msg="CreateContainer within sandbox \"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:15:08.180626 containerd[1586]: time="2024-10-09T07:15:08.180561017Z" level=info msg="CreateContainer within sandbox \"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99400c3dbfb5108075dfa71fbc58508d8c0dff2ee92ecab1623993d99762866d\"" Oct 9 07:15:08.181475 containerd[1586]: time="2024-10-09T07:15:08.181027912Z" level=info msg="StartContainer for \"99400c3dbfb5108075dfa71fbc58508d8c0dff2ee92ecab1623993d99762866d\"" Oct 9 07:15:08.240884 containerd[1586]: time="2024-10-09T07:15:08.240828480Z" level=info msg="StartContainer for \"99400c3dbfb5108075dfa71fbc58508d8c0dff2ee92ecab1623993d99762866d\" returns successfully" Oct 9 07:15:08.851939 containerd[1586]: time="2024-10-09T07:15:08.851871918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:08.852639 containerd[1586]: time="2024-10-09T07:15:08.852602930Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:15:08.853651 containerd[1586]: time="2024-10-09T07:15:08.853617685Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:08.855706 containerd[1586]: time="2024-10-09T07:15:08.855665328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:08.856454 containerd[1586]: time="2024-10-09T07:15:08.856398954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.738698306s" Oct 9 07:15:08.856454 containerd[1586]: time="2024-10-09T07:15:08.856436655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:15:08.857932 containerd[1586]: time="2024-10-09T07:15:08.857884382Z" level=info msg="CreateContainer within sandbox \"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:15:08.879292 containerd[1586]: time="2024-10-09T07:15:08.879244458Z" level=info msg="CreateContainer within sandbox \"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fe64d89b4834b8c4903d16e896517bfea7d37807cba66d859838e9d4f3028a4b\"" Oct 9 07:15:08.879758 containerd[1586]: time="2024-10-09T07:15:08.879722465Z" level=info 
msg="StartContainer for \"fe64d89b4834b8c4903d16e896517bfea7d37807cba66d859838e9d4f3028a4b\"" Oct 9 07:15:09.029149 containerd[1586]: time="2024-10-09T07:15:09.029098765Z" level=info msg="StartContainer for \"fe64d89b4834b8c4903d16e896517bfea7d37807cba66d859838e9d4f3028a4b\" returns successfully" Oct 9 07:15:09.030528 containerd[1586]: time="2024-10-09T07:15:09.030256808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:15:09.032200 kubelet[2740]: E1009 07:15:09.032169 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:09.055942 kubelet[2740]: I1009 07:15:09.055852 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ljpqg" podStartSLOduration=35.055774916 podStartE2EDuration="35.055774916s" podCreationTimestamp="2024-10-09 07:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:15:09.05455156 +0000 UTC m=+49.309885965" watchObservedRunningTime="2024-10-09 07:15:09.055774916 +0000 UTC m=+49.311109341" Oct 9 07:15:09.782093 systemd-networkd[1259]: cali72b628c8584: Gained IPv6LL Oct 9 07:15:10.035523 kubelet[2740]: E1009 07:15:10.035369 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:10.510254 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:48818.service - OpenSSH per-connection server daemon (10.0.0.1:48818). 
Oct 9 07:15:10.550820 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 48818 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:10.552552 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:10.556880 systemd-logind[1569]: New session 14 of user core. Oct 9 07:15:10.563332 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:15:10.695662 sshd[4800]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:10.700739 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:48818.service: Deactivated successfully. Oct 9 07:15:10.703481 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:15:10.703665 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:15:10.705672 systemd-logind[1569]: Removed session 14. Oct 9 07:15:11.037616 kubelet[2740]: E1009 07:15:11.037582 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:11.523182 containerd[1586]: time="2024-10-09T07:15:11.523135855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:11.524108 containerd[1586]: time="2024-10-09T07:15:11.524048166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:15:11.525387 containerd[1586]: time="2024-10-09T07:15:11.525361872Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:11.527364 containerd[1586]: time="2024-10-09T07:15:11.527329213Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:11.527934 containerd[1586]: time="2024-10-09T07:15:11.527875878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.497577021s" Oct 9 07:15:11.527975 containerd[1586]: time="2024-10-09T07:15:11.527936372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:15:11.529577 containerd[1586]: time="2024-10-09T07:15:11.529552675Z" level=info msg="CreateContainer within sandbox \"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:15:11.544885 containerd[1586]: time="2024-10-09T07:15:11.544847454Z" level=info msg="CreateContainer within sandbox \"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"91eaa6333887a0d071da007cb195c734232981d4e52db4d9a7c4b4615e960490\"" Oct 9 07:15:11.545297 containerd[1586]: time="2024-10-09T07:15:11.545267553Z" level=info msg="StartContainer for \"91eaa6333887a0d071da007cb195c734232981d4e52db4d9a7c4b4615e960490\"" Oct 9 07:15:11.608876 containerd[1586]: time="2024-10-09T07:15:11.608837775Z" level=info msg="StartContainer for \"91eaa6333887a0d071da007cb195c734232981d4e52db4d9a7c4b4615e960490\" returns successfully" Oct 9 07:15:11.928815 kubelet[2740]: I1009 
07:15:11.928784 2740 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:15:11.929927 kubelet[2740]: I1009 07:15:11.929893 2740 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:15:12.051382 kubelet[2740]: I1009 07:15:12.051345 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xgjcs" podStartSLOduration=26.729340019 podStartE2EDuration="32.051307216s" podCreationTimestamp="2024-10-09 07:14:40 +0000 UTC" firstStartedPulling="2024-10-09 07:15:06.206212712 +0000 UTC m=+46.461547127" lastFinishedPulling="2024-10-09 07:15:11.528179909 +0000 UTC m=+51.783514324" observedRunningTime="2024-10-09 07:15:12.050622041 +0000 UTC m=+52.305956456" watchObservedRunningTime="2024-10-09 07:15:12.051307216 +0000 UTC m=+52.306641631" Oct 9 07:15:15.712279 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:48830.service - OpenSSH per-connection server daemon (10.0.0.1:48830). Oct 9 07:15:15.749784 sshd[4875]: Accepted publickey for core from 10.0.0.1 port 48830 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:15.751786 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:15.756277 systemd-logind[1569]: New session 15 of user core. Oct 9 07:15:15.763231 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:15:15.889092 sshd[4875]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:15.892805 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:48830.service: Deactivated successfully. Oct 9 07:15:15.895127 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:15:15.895206 systemd[1]: session-15.scope: Deactivated successfully. 
Oct 9 07:15:15.896322 systemd-logind[1569]: Removed session 15. Oct 9 07:15:16.015292 kubelet[2740]: E1009 07:15:16.015150 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:19.835943 containerd[1586]: time="2024-10-09T07:15:19.835887680Z" level=info msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\"" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.872 [WARNING][4930] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t6jwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"781aa2fc-41fc-40b3-9700-ecfd097a2855", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce", Pod:"coredns-76f75df574-t6jwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie346df2c1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.872 [INFO][4930] k8s.go 608: Cleaning up netns ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.872 [INFO][4930] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" iface="eth0" netns="" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.872 [INFO][4930] k8s.go 615: Releasing IP address(es) ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.872 [INFO][4930] utils.go 188: Calico CNI releasing IP address ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.894 [INFO][4938] ipam_plugin.go 417: Releasing address using handleID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.894 [INFO][4938] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.894 [INFO][4938] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.899 [WARNING][4938] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.899 [INFO][4938] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.900 [INFO][4938] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:19.906199 containerd[1586]: 2024-10-09 07:15:19.903 [INFO][4930] k8s.go 621: Teardown processing complete. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.906786 containerd[1586]: time="2024-10-09T07:15:19.906250472Z" level=info msg="TearDown network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" successfully" Oct 9 07:15:19.906786 containerd[1586]: time="2024-10-09T07:15:19.906276823Z" level=info msg="StopPodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" returns successfully" Oct 9 07:15:19.907192 containerd[1586]: time="2024-10-09T07:15:19.907133631Z" level=info msg="RemovePodSandbox for \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\"" Oct 9 07:15:19.910370 containerd[1586]: time="2024-10-09T07:15:19.910335950Z" level=info msg="Forcibly stopping sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\"" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.954 [WARNING][4960] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t6jwx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"781aa2fc-41fc-40b3-9700-ecfd097a2855", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73c4b491ab80fa853eb8c8e5102bdb2216fa591f32adedf497f33e988516dce", Pod:"coredns-76f75df574-t6jwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie346df2c1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.955 [INFO][4960] k8s.go 608: 
Cleaning up netns ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.955 [INFO][4960] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" iface="eth0" netns="" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.955 [INFO][4960] k8s.go 615: Releasing IP address(es) ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.955 [INFO][4960] utils.go 188: Calico CNI releasing IP address ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.976 [INFO][4967] ipam_plugin.go 417: Releasing address using handleID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.976 [INFO][4967] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.976 [INFO][4967] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.980 [WARNING][4967] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.980 [INFO][4967] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" HandleID="k8s-pod-network.e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Workload="localhost-k8s-coredns--76f75df574--t6jwx-eth0" Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.982 [INFO][4967] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:19.987545 containerd[1586]: 2024-10-09 07:15:19.985 [INFO][4960] k8s.go 621: Teardown processing complete. ContainerID="e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e" Oct 9 07:15:19.988031 containerd[1586]: time="2024-10-09T07:15:19.987589509Z" level=info msg="TearDown network for sandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" successfully" Oct 9 07:15:20.005532 containerd[1586]: time="2024-10-09T07:15:20.005485854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:15:20.011046 containerd[1586]: time="2024-10-09T07:15:20.011013094Z" level=info msg="RemovePodSandbox \"e795022bc9e887eee14a9c2e7e601c4395f06b2e43b05e1c394931d93305121e\" returns successfully" Oct 9 07:15:20.011603 containerd[1586]: time="2024-10-09T07:15:20.011552607Z" level=info msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\"" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.052 [WARNING][4990] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xgjcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2ae45c7-48f4-4c14-9998-b19005636b8c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2", Pod:"csi-node-driver-xgjcs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali94062497b53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.052 [INFO][4990] k8s.go 608: Cleaning up netns ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.052 [INFO][4990] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" iface="eth0" netns="" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.052 [INFO][4990] k8s.go 615: Releasing IP address(es) ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.052 [INFO][4990] utils.go 188: Calico CNI releasing IP address ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.074 [INFO][4998] ipam_plugin.go 417: Releasing address using handleID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.074 [INFO][4998] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.074 [INFO][4998] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.080 [WARNING][4998] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.080 [INFO][4998] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.081 [INFO][4998] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:20.087409 containerd[1586]: 2024-10-09 07:15:20.084 [INFO][4990] k8s.go 621: Teardown processing complete. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.088224 containerd[1586]: time="2024-10-09T07:15:20.087392482Z" level=info msg="TearDown network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" successfully" Oct 9 07:15:20.088224 containerd[1586]: time="2024-10-09T07:15:20.087427440Z" level=info msg="StopPodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" returns successfully" Oct 9 07:15:20.088224 containerd[1586]: time="2024-10-09T07:15:20.087957325Z" level=info msg="RemovePodSandbox for \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\"" Oct 9 07:15:20.088224 containerd[1586]: time="2024-10-09T07:15:20.087986772Z" level=info msg="Forcibly stopping sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\"" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.122 [WARNING][5021] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xgjcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2ae45c7-48f4-4c14-9998-b19005636b8c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a641cce979db390123b972081cc8513e0caa15d650493841936e94425a1affd2", Pod:"csi-node-driver-xgjcs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali94062497b53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.123 [INFO][5021] k8s.go 608: Cleaning up netns ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.123 [INFO][5021] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" iface="eth0" netns="" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.123 [INFO][5021] k8s.go 615: Releasing IP address(es) ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.123 [INFO][5021] utils.go 188: Calico CNI releasing IP address ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.143 [INFO][5028] ipam_plugin.go 417: Releasing address using handleID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.143 [INFO][5028] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.143 [INFO][5028] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.148 [WARNING][5028] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.148 [INFO][5028] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" HandleID="k8s-pod-network.2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Workload="localhost-k8s-csi--node--driver--xgjcs-eth0" Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.149 [INFO][5028] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:15:20.155431 containerd[1586]: 2024-10-09 07:15:20.152 [INFO][5021] k8s.go 621: Teardown processing complete. ContainerID="2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f" Oct 9 07:15:20.155891 containerd[1586]: time="2024-10-09T07:15:20.155485566Z" level=info msg="TearDown network for sandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" successfully" Oct 9 07:15:20.163777 containerd[1586]: time="2024-10-09T07:15:20.163732065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:15:20.163839 containerd[1586]: time="2024-10-09T07:15:20.163796940Z" level=info msg="RemovePodSandbox \"2af231a656d0be3d3cad616fba1142dcb114e5fadf04c519de55c020baf8966f\" returns successfully" Oct 9 07:15:20.164292 containerd[1586]: time="2024-10-09T07:15:20.164264204Z" level=info msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\"" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.201 [WARNING][5050] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0", GenerateName:"calico-kube-controllers-6bbc98dcd-", Namespace:"calico-system", SelfLink:"", UID:"04d277c1-6044-4d8c-9a67-d2697166170d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc98dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74", Pod:"calico-kube-controllers-6bbc98dcd-jk426", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c936db9554", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.201 [INFO][5050] k8s.go 608: Cleaning up netns ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.201 [INFO][5050] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" iface="eth0" netns="" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.201 [INFO][5050] k8s.go 615: Releasing IP address(es) ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.201 [INFO][5050] utils.go 188: Calico CNI releasing IP address ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.222 [INFO][5057] ipam_plugin.go 417: Releasing address using handleID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.223 [INFO][5057] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.223 [INFO][5057] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.228 [WARNING][5057] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.228 [INFO][5057] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.230 [INFO][5057] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:20.235813 containerd[1586]: 2024-10-09 07:15:20.232 [INFO][5050] k8s.go 621: Teardown processing complete. ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.236274 containerd[1586]: time="2024-10-09T07:15:20.235871244Z" level=info msg="TearDown network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" successfully" Oct 9 07:15:20.236274 containerd[1586]: time="2024-10-09T07:15:20.235899438Z" level=info msg="StopPodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" returns successfully" Oct 9 07:15:20.236456 containerd[1586]: time="2024-10-09T07:15:20.236427119Z" level=info msg="RemovePodSandbox for \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\"" Oct 9 07:15:20.236456 containerd[1586]: time="2024-10-09T07:15:20.236471996Z" level=info msg="Forcibly stopping sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\"" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.273 [WARNING][5080] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0", GenerateName:"calico-kube-controllers-6bbc98dcd-", Namespace:"calico-system", SelfLink:"", UID:"04d277c1-6044-4d8c-9a67-d2697166170d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc98dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8f982612082f0f309feff876d0702da7d3a72bedec0f0a567b099bf85b09a74", Pod:"calico-kube-controllers-6bbc98dcd-jk426", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c936db9554", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.273 [INFO][5080] k8s.go 608: Cleaning up netns ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.273 [INFO][5080] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" iface="eth0" netns="" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.273 [INFO][5080] k8s.go 615: Releasing IP address(es) ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.273 [INFO][5080] utils.go 188: Calico CNI releasing IP address ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.296 [INFO][5088] ipam_plugin.go 417: Releasing address using handleID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.296 [INFO][5088] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.296 [INFO][5088] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.301 [WARNING][5088] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.301 [INFO][5088] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" HandleID="k8s-pod-network.f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Workload="localhost-k8s-calico--kube--controllers--6bbc98dcd--jk426-eth0" Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.302 [INFO][5088] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:20.308253 containerd[1586]: 2024-10-09 07:15:20.305 [INFO][5080] k8s.go 621: Teardown processing complete. ContainerID="f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911" Oct 9 07:15:20.308753 containerd[1586]: time="2024-10-09T07:15:20.308296796Z" level=info msg="TearDown network for sandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" successfully" Oct 9 07:15:20.312053 containerd[1586]: time="2024-10-09T07:15:20.312022632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:15:20.312097 containerd[1586]: time="2024-10-09T07:15:20.312077839Z" level=info msg="RemovePodSandbox \"f280e2175dd040cc5d39e1999da79b33da2ba7d8c2d3212e4017e84d9c8e5911\" returns successfully" Oct 9 07:15:20.312623 containerd[1586]: time="2024-10-09T07:15:20.312594608Z" level=info msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\"" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.348 [WARNING][5110] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ljpqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6da8c6fb-d852-4cac-a809-b18748e45975", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6", Pod:"coredns-76f75df574-ljpqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b628c8584", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.348 [INFO][5110] k8s.go 608: Cleaning up netns ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.348 [INFO][5110] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" iface="eth0" netns="" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.348 [INFO][5110] k8s.go 615: Releasing IP address(es) ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.348 [INFO][5110] utils.go 188: Calico CNI releasing IP address ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.375 [INFO][5118] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.376 [INFO][5118] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.376 [INFO][5118] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.381 [WARNING][5118] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.381 [INFO][5118] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.382 [INFO][5118] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:20.388857 containerd[1586]: 2024-10-09 07:15:20.385 [INFO][5110] k8s.go 621: Teardown processing complete. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.389537 containerd[1586]: time="2024-10-09T07:15:20.388882310Z" level=info msg="TearDown network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" successfully" Oct 9 07:15:20.389537 containerd[1586]: time="2024-10-09T07:15:20.388906898Z" level=info msg="StopPodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" returns successfully" Oct 9 07:15:20.389537 containerd[1586]: time="2024-10-09T07:15:20.389511307Z" level=info msg="RemovePodSandbox for \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\"" Oct 9 07:15:20.389639 containerd[1586]: time="2024-10-09T07:15:20.389560812Z" level=info msg="Forcibly stopping sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\"" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.431 [WARNING][5140] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ljpqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6da8c6fb-d852-4cac-a809-b18748e45975", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0572a4f7ae8297ac2c7390c9647fd41801ee63b593c1bed94dec1bd220048f6", Pod:"coredns-76f75df574-ljpqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b628c8584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.431 [INFO][5140] k8s.go 608: 
Cleaning up netns ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.431 [INFO][5140] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" iface="eth0" netns="" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.431 [INFO][5140] k8s.go 615: Releasing IP address(es) ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.431 [INFO][5140] utils.go 188: Calico CNI releasing IP address ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.456 [INFO][5147] ipam_plugin.go 417: Releasing address using handleID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.456 [INFO][5147] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.456 [INFO][5147] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.462 [WARNING][5147] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.463 [INFO][5147] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" HandleID="k8s-pod-network.5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Workload="localhost-k8s-coredns--76f75df574--ljpqg-eth0" Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.465 [INFO][5147] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:20.472057 containerd[1586]: 2024-10-09 07:15:20.468 [INFO][5140] k8s.go 621: Teardown processing complete. ContainerID="5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac" Oct 9 07:15:20.472644 containerd[1586]: time="2024-10-09T07:15:20.472111438Z" level=info msg="TearDown network for sandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" successfully" Oct 9 07:15:20.476877 containerd[1586]: time="2024-10-09T07:15:20.476809523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:15:20.477116 containerd[1586]: time="2024-10-09T07:15:20.476904808Z" level=info msg="RemovePodSandbox \"5c1f2e1871c283fb6a7972e83b36d988320b9dab5afe66a5371660de77b2b1ac\" returns successfully" Oct 9 07:15:20.905154 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:54908.service - OpenSSH per-connection server daemon (10.0.0.1:54908). 
Oct 9 07:15:20.941412 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 54908 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:20.943265 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:20.947931 systemd-logind[1569]: New session 16 of user core. Oct 9 07:15:20.953312 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:15:21.114642 sshd[5155]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:21.119172 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:54908.service: Deactivated successfully. Oct 9 07:15:21.122192 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:15:21.122388 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:15:21.123751 systemd-logind[1569]: Removed session 16. Oct 9 07:15:24.637519 kubelet[2740]: I1009 07:15:24.637267 2740 topology_manager.go:215] "Topology Admit Handler" podUID="780a1c7e-ff7a-4e6f-816a-bfc4b73a0614" podNamespace="calico-apiserver" podName="calico-apiserver-86766596b5-jc4dd" Oct 9 07:15:24.736860 kubelet[2740]: I1009 07:15:24.736758 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spns\" (UniqueName: \"kubernetes.io/projected/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-kube-api-access-5spns\") pod \"calico-apiserver-86766596b5-jc4dd\" (UID: \"780a1c7e-ff7a-4e6f-816a-bfc4b73a0614\") " pod="calico-apiserver/calico-apiserver-86766596b5-jc4dd" Oct 9 07:15:24.736860 kubelet[2740]: I1009 07:15:24.736819 2740 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-calico-apiserver-certs\") pod \"calico-apiserver-86766596b5-jc4dd\" (UID: \"780a1c7e-ff7a-4e6f-816a-bfc4b73a0614\") " pod="calico-apiserver/calico-apiserver-86766596b5-jc4dd" Oct 9 07:15:24.837537 kubelet[2740]: E1009 
07:15:24.837489 2740 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:15:24.837726 kubelet[2740]: E1009 07:15:24.837573 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-calico-apiserver-certs podName:780a1c7e-ff7a-4e6f-816a-bfc4b73a0614 nodeName:}" failed. No retries permitted until 2024-10-09 07:15:25.337553964 +0000 UTC m=+65.592888379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-calico-apiserver-certs") pod "calico-apiserver-86766596b5-jc4dd" (UID: "780a1c7e-ff7a-4e6f-816a-bfc4b73a0614") : secret "calico-apiserver-certs" not found Oct 9 07:15:25.339800 kubelet[2740]: E1009 07:15:25.339756 2740 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:15:25.340003 kubelet[2740]: E1009 07:15:25.339837 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-calico-apiserver-certs podName:780a1c7e-ff7a-4e6f-816a-bfc4b73a0614 nodeName:}" failed. No retries permitted until 2024-10-09 07:15:26.33982211 +0000 UTC m=+66.595156525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/780a1c7e-ff7a-4e6f-816a-bfc4b73a0614-calico-apiserver-certs") pod "calico-apiserver-86766596b5-jc4dd" (UID: "780a1c7e-ff7a-4e6f-816a-bfc4b73a0614") : secret "calico-apiserver-certs" not found Oct 9 07:15:26.132226 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:54920.service - OpenSSH per-connection server daemon (10.0.0.1:54920). 
Oct 9 07:15:26.165934 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 54920 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:26.167519 sshd[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:26.191550 systemd-logind[1569]: New session 17 of user core. Oct 9 07:15:26.199399 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:15:26.310293 sshd[5208]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:26.318174 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928). Oct 9 07:15:26.318834 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:54920.service: Deactivated successfully. Oct 9 07:15:26.324118 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:15:26.325269 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:15:26.326279 systemd-logind[1569]: Removed session 17. Oct 9 07:15:26.352942 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:26.353655 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:26.358132 systemd-logind[1569]: New session 18 of user core. Oct 9 07:15:26.366218 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 9 07:15:26.445992 containerd[1586]: time="2024-10-09T07:15:26.445855602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766596b5-jc4dd,Uid:780a1c7e-ff7a-4e6f-816a-bfc4b73a0614,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:15:26.567244 systemd-networkd[1259]: calicc1442aab53: Link UP Oct 9 07:15:26.568172 systemd-networkd[1259]: calicc1442aab53: Gained carrier Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.503 [INFO][5233] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0 calico-apiserver-86766596b5- calico-apiserver 780a1c7e-ff7a-4e6f-816a-bfc4b73a0614 1021 0 2024-10-09 07:15:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86766596b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86766596b5-jc4dd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc1442aab53 [] []}} ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.503 [INFO][5233] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.530 [INFO][5247] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" 
HandleID="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Workload="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.539 [INFO][5247] ipam_plugin.go 270: Auto assigning IP ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" HandleID="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Workload="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86766596b5-jc4dd", "timestamp":"2024-10-09 07:15:26.530492992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.539 [INFO][5247] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.539 [INFO][5247] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.539 [INFO][5247] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.540 [INFO][5247] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.543 [INFO][5247] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.547 [INFO][5247] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.548 [INFO][5247] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.551 [INFO][5247] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.551 [INFO][5247] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.552 [INFO][5247] ipam.go 1685: Creating new handle: k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91 Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.556 [INFO][5247] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.562 [INFO][5247] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" host="localhost" Oct 9 
07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.562 [INFO][5247] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" host="localhost" Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.562 [INFO][5247] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:15:26.581250 containerd[1586]: 2024-10-09 07:15:26.562 [INFO][5247] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" HandleID="k8s-pod-network.b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Workload="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.564 [INFO][5233] k8s.go 386: Populated endpoint ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0", GenerateName:"calico-apiserver-86766596b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"780a1c7e-ff7a-4e6f-816a-bfc4b73a0614", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766596b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86766596b5-jc4dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc1442aab53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.565 [INFO][5233] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.565 [INFO][5233] dataplane_linux.go 68: Setting the host side veth name to calicc1442aab53 ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.568 [INFO][5233] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.568 [INFO][5233] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0", GenerateName:"calico-apiserver-86766596b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"780a1c7e-ff7a-4e6f-816a-bfc4b73a0614", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766596b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91", Pod:"calico-apiserver-86766596b5-jc4dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc1442aab53", MAC:"72:5e:cb:02:2b:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:15:26.581910 containerd[1586]: 2024-10-09 07:15:26.577 [INFO][5233] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91" Namespace="calico-apiserver" Pod="calico-apiserver-86766596b5-jc4dd" WorkloadEndpoint="localhost-k8s-calico--apiserver--86766596b5--jc4dd-eth0" Oct 9 07:15:26.608100 
containerd[1586]: time="2024-10-09T07:15:26.607994484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:15:26.609273 containerd[1586]: time="2024-10-09T07:15:26.608068106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:26.609472 containerd[1586]: time="2024-10-09T07:15:26.608898864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:15:26.609472 containerd[1586]: time="2024-10-09T07:15:26.608994999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:15:26.637651 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:15:26.670193 containerd[1586]: time="2024-10-09T07:15:26.670149874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766596b5-jc4dd,Uid:780a1c7e-ff7a-4e6f-816a-bfc4b73a0614,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91\"" Oct 9 07:15:26.670575 sshd[5220]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:26.673203 containerd[1586]: time="2024-10-09T07:15:26.673170549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:15:26.678190 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:54942.service - OpenSSH per-connection server daemon (10.0.0.1:54942). Oct 9 07:15:26.678686 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:54928.service: Deactivated successfully. Oct 9 07:15:26.682597 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:15:26.682740 systemd[1]: session-18.scope: Deactivated successfully. 
Oct 9 07:15:26.683987 systemd-logind[1569]: Removed session 18. Oct 9 07:15:26.711819 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 54942 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:26.713357 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:26.717866 systemd-logind[1569]: New session 19 of user core. Oct 9 07:15:26.731160 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:15:28.204127 sshd[5313]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:28.213269 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444). Oct 9 07:15:28.213798 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:54942.service: Deactivated successfully. Oct 9 07:15:28.216301 systemd-networkd[1259]: calicc1442aab53: Gained IPv6LL Oct 9 07:15:28.223063 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:15:28.225098 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:15:28.226875 systemd-logind[1569]: Removed session 19. Oct 9 07:15:28.255307 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:28.256957 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:28.261407 systemd-logind[1569]: New session 20 of user core. Oct 9 07:15:28.281181 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:15:28.632411 sshd[5331]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:28.641475 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:59456.service - OpenSSH per-connection server daemon (10.0.0.1:59456). Oct 9 07:15:28.642161 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:59444.service: Deactivated successfully. Oct 9 07:15:28.645229 systemd[1]: session-20.scope: Deactivated successfully. 
Oct 9 07:15:28.647731 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:15:28.649581 systemd-logind[1569]: Removed session 20. Oct 9 07:15:28.694222 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 59456 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:28.696032 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:28.700762 systemd-logind[1569]: New session 21 of user core. Oct 9 07:15:28.714177 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:15:28.821603 sshd[5346]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:28.825801 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:59456.service: Deactivated successfully. Oct 9 07:15:28.828564 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:15:28.828613 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:15:28.829576 systemd-logind[1569]: Removed session 21. 
Oct 9 07:15:30.040101 containerd[1586]: time="2024-10-09T07:15:30.040011458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:30.041450 containerd[1586]: time="2024-10-09T07:15:30.041408931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:15:30.042907 containerd[1586]: time="2024-10-09T07:15:30.042871580Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:30.045483 containerd[1586]: time="2024-10-09T07:15:30.045403591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:15:30.046229 containerd[1586]: time="2024-10-09T07:15:30.046182797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.372957482s" Oct 9 07:15:30.046319 containerd[1586]: time="2024-10-09T07:15:30.046230809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:15:30.048359 containerd[1586]: time="2024-10-09T07:15:30.048316413Z" level=info msg="CreateContainer within sandbox \"b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:15:30.059384 containerd[1586]: 
time="2024-10-09T07:15:30.059220768Z" level=info msg="CreateContainer within sandbox \"b17589ad1d6c41d10b35bbc0854c3cfe35274b8b5d27537a579553e581d21a91\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec57bece86a4ea176f47d7db0939cfaaa36c35b15e4d26a34939912cb5167444\"" Oct 9 07:15:30.061067 containerd[1586]: time="2024-10-09T07:15:30.059760224Z" level=info msg="StartContainer for \"ec57bece86a4ea176f47d7db0939cfaaa36c35b15e4d26a34939912cb5167444\"" Oct 9 07:15:30.589865 containerd[1586]: time="2024-10-09T07:15:30.589803478Z" level=info msg="StartContainer for \"ec57bece86a4ea176f47d7db0939cfaaa36c35b15e4d26a34939912cb5167444\" returns successfully" Oct 9 07:15:31.120459 kubelet[2740]: I1009 07:15:31.120393 2740 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86766596b5-jc4dd" podStartSLOduration=3.745901308 podStartE2EDuration="7.120349428s" podCreationTimestamp="2024-10-09 07:15:24 +0000 UTC" firstStartedPulling="2024-10-09 07:15:26.672133232 +0000 UTC m=+66.927467657" lastFinishedPulling="2024-10-09 07:15:30.046581362 +0000 UTC m=+70.301915777" observedRunningTime="2024-10-09 07:15:31.109507986 +0000 UTC m=+71.364842401" watchObservedRunningTime="2024-10-09 07:15:31.120349428 +0000 UTC m=+71.375683843" Oct 9 07:15:33.839167 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462). Oct 9 07:15:33.872568 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:33.874151 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:33.878516 systemd-logind[1569]: New session 22 of user core. Oct 9 07:15:33.886172 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 9 07:15:33.999529 sshd[5424]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:34.004290 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:59462.service: Deactivated successfully. Oct 9 07:15:34.007108 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:15:34.007805 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:15:34.008861 systemd-logind[1569]: Removed session 22. Oct 9 07:15:35.839165 kubelet[2740]: E1009 07:15:35.839125 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:39.020157 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:35212.service - OpenSSH per-connection server daemon (10.0.0.1:35212). Oct 9 07:15:39.051549 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 35212 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:39.053350 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:39.057333 systemd-logind[1569]: New session 23 of user core. Oct 9 07:15:39.064175 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:15:39.174246 sshd[5446]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:39.179013 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:35212.service: Deactivated successfully. Oct 9 07:15:39.182049 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:15:39.182840 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:15:39.183843 systemd-logind[1569]: Removed session 23. 
Oct 9 07:15:39.839687 kubelet[2740]: E1009 07:15:39.839641 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:15:44.183244 systemd[1]: Started sshd@23-10.0.0.30:22-10.0.0.1:35228.service - OpenSSH per-connection server daemon (10.0.0.1:35228). Oct 9 07:15:44.216230 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 35228 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:44.217789 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:44.222183 systemd-logind[1569]: New session 24 of user core. Oct 9 07:15:44.230179 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:15:44.345948 sshd[5473]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:44.350075 systemd[1]: sshd@23-10.0.0.30:22-10.0.0.1:35228.service: Deactivated successfully. Oct 9 07:15:44.352772 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:15:44.352849 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:15:44.354107 systemd-logind[1569]: Removed session 24. Oct 9 07:15:49.358245 systemd[1]: Started sshd@24-10.0.0.30:22-10.0.0.1:57680.service - OpenSSH per-connection server daemon (10.0.0.1:57680). Oct 9 07:15:49.392326 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 57680 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:15:49.394930 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:15:49.400362 systemd-logind[1569]: New session 25 of user core. Oct 9 07:15:49.407203 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 07:15:49.532951 sshd[5513]: pam_unix(sshd:session): session closed for user core Oct 9 07:15:49.537167 systemd[1]: sshd@24-10.0.0.30:22-10.0.0.1:57680.service: Deactivated successfully. 
Oct 9 07:15:49.539959 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:15:49.540044 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:15:49.541341 systemd-logind[1569]: Removed session 25.