Jul 2 00:23:23.974616 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:23:23.974651 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:23:23.974668 kernel: BIOS-provided physical RAM map:
Jul 2 00:23:23.974682 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 00:23:23.974702 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 2 00:23:23.974711 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 2 00:23:23.974726 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 2 00:23:23.974750 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 2 00:23:23.974760 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 2 00:23:23.974770 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 2 00:23:23.974788 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 2 00:23:23.974811 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 2 00:23:23.974820 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 2 00:23:23.974829 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 2 00:23:23.974848 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 2 00:23:23.974868 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 2 00:23:23.974879 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 2 00:23:23.974889 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 2 00:23:23.974899 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 2 00:23:23.974909 kernel: NX (Execute Disable) protection: active
Jul 2 00:23:23.974922 kernel: APIC: Static calls initialized
Jul 2 00:23:23.974956 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:23:23.974996 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4f9018
Jul 2 00:23:23.975034 kernel: SMBIOS 2.8 present.
Jul 2 00:23:23.975085 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Jul 2 00:23:23.975158 kernel: Hypervisor detected: KVM
Jul 2 00:23:23.975176 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:23:23.975214 kernel: kvm-clock: using sched offset of 5436419958 cycles
Jul 2 00:23:23.975240 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:23:23.975261 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:23:23.975272 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:23:23.975292 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:23:23.975304 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 2 00:23:23.975315 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 2 00:23:23.975330 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:23:23.975343 kernel: Using GB pages for direct mapping
Jul 2 00:23:23.975362 kernel: Secure boot disabled
Jul 2 00:23:23.975373 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:23:23.975383 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 2 00:23:23.975399 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:23:23.975416 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:23.975427 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:23.975441 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 2 00:23:23.975453 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:23.975464 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:23.975475 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:23.975486 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 2 00:23:23.975496 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Jul 2 00:23:23.975507 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Jul 2 00:23:23.975518 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 2 00:23:23.975533 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Jul 2 00:23:23.975544 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Jul 2 00:23:23.975555 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Jul 2 00:23:23.975566 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Jul 2 00:23:23.975576 kernel: No NUMA configuration found
Jul 2 00:23:23.975587 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 2 00:23:23.975598 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 2 00:23:23.975609 kernel: Zone ranges:
Jul 2 00:23:23.975620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:23:23.975635 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 2 00:23:23.975646 kernel: Normal empty
Jul 2 00:23:23.975657 kernel: Movable zone start for each node
Jul 2 00:23:23.975667 kernel: Early memory node ranges
Jul 2 00:23:23.975678 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 00:23:23.975689 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 2 00:23:23.975700 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 2 00:23:23.975717 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 2 00:23:23.975728 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 2 00:23:23.975749 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 2 00:23:23.975765 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 2 00:23:23.975776 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:23:23.975799 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 00:23:23.975820 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 2 00:23:23.975831 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:23:23.975861 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 2 00:23:23.975877 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 2 00:23:23.975897 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 2 00:23:23.975908 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 00:23:23.975925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:23:23.975936 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:23:23.975948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:23:23.975959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:23:23.975969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:23:23.975981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:23:23.975992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:23:23.976003 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:23:23.976027 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:23:23.976059 kernel: TSC deadline timer available
Jul 2 00:23:23.976086 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:23:23.976128 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:23:23.976157 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:23:23.976181 kernel: kvm-guest: setup PV sched yield
Jul 2 00:23:23.976205 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Jul 2 00:23:23.976225 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:23:23.976251 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:23:23.976276 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:23:23.976294 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:23:23.976317 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:23:23.976342 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:23:23.976368 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:23:23.976385 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:23:23.976402 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:23:23.976413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:23:23.976424 kernel: random: crng init done
Jul 2 00:23:23.976441 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:23:23.976453 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:23:23.976464 kernel: Fallback order for Node 0: 0
Jul 2 00:23:23.976474 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 2 00:23:23.976484 kernel: Policy zone: DMA32
Jul 2 00:23:23.976494 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:23:23.976515 kernel: Memory: 2388204K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 178536K reserved, 0K cma-reserved)
Jul 2 00:23:23.976530 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:23:23.976542 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:23:23.976562 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:23:23.976572 kernel: Dynamic Preempt: voluntary
Jul 2 00:23:23.976583 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:23:23.976594 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:23:23.976606 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:23:23.976632 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:23:23.976643 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:23:23.976654 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:23:23.976665 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:23:23.976677 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:23:23.976688 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:23:23.976700 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:23:23.976715 kernel: Console: colour dummy device 80x25
Jul 2 00:23:23.976726 kernel: printk: console [ttyS0] enabled
Jul 2 00:23:23.976748 kernel: ACPI: Core revision 20230628
Jul 2 00:23:23.976760 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:23:23.976771 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:23:23.976791 kernel: x2apic enabled
Jul 2 00:23:23.976803 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:23:23.976828 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:23:23.976850 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:23:23.976864 kernel: kvm-guest: setup PV IPIs
Jul 2 00:23:23.976875 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:23:23.976887 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:23:23.976898 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:23:23.976909 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:23:23.976926 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:23:23.976937 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:23:23.976948 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:23:23.976960 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:23:23.976971 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:23:23.976982 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:23:23.976993 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:23:23.977004 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:23:23.977014 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:23:23.977029 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:23:23.977041 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:23:23.977053 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:23:23.977072 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:23:23.977090 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:23:23.977102 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:23:23.977131 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:23:23.977143 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:23:23.977159 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:23:23.977170 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:23:23.977182 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:23:23.977193 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:23:23.977209 kernel: SELinux: Initializing.
Jul 2 00:23:23.977221 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:23:23.977242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:23:23.977255 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:23:23.977266 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:23.977282 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:23.977292 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:23.977302 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:23:23.977314 kernel: ... version: 0
Jul 2 00:23:23.977325 kernel: ... bit width: 48
Jul 2 00:23:23.977336 kernel: ... generic registers: 6
Jul 2 00:23:23.977347 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:23:23.977359 kernel: ... max period: 00007fffffffffff
Jul 2 00:23:23.977371 kernel: ... fixed-purpose events: 0
Jul 2 00:23:23.977407 kernel: ... event mask: 000000000000003f
Jul 2 00:23:23.977419 kernel: signal: max sigframe size: 1776
Jul 2 00:23:23.977435 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:23:23.977463 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:23:23.977482 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:23:23.977493 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:23:23.977505 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:23:23.977519 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:23:23.977531 kernel: smpboot: Max logical packages: 1
Jul 2 00:23:23.977548 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:23:23.977560 kernel: devtmpfs: initialized
Jul 2 00:23:23.977572 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:23:23.977583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 2 00:23:23.977595 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 2 00:23:23.977612 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 2 00:23:23.977624 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 2 00:23:23.977635 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 2 00:23:23.977647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:23:23.977664 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:23:23.977675 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:23:23.977686 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:23:23.977698 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:23:23.977710 kernel: audit: type=2000 audit(1719879803.056:1): state=initialized audit_enabled=0 res=1
Jul 2 00:23:23.977721 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:23:23.977742 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:23:23.977754 kernel: cpuidle: using governor menu
Jul 2 00:23:23.977765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:23:23.977782 kernel: dca service started, version 1.12.1
Jul 2 00:23:23.977793 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:23:23.977805 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:23:23.977817 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:23:23.977829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:23:23.977844 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:23:23.977856 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:23:23.977867 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:23:23.977879 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:23:23.977895 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:23:23.977907 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:23:23.977918 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:23:23.977930 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:23:23.977941 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:23:23.977953 kernel: ACPI: Interpreter enabled
Jul 2 00:23:23.977964 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:23:23.977975 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:23:23.977987 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:23:23.978002 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:23:23.978013 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:23:23.978025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:23:23.978418 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:23:23.978459 kernel: acpiphp: Slot [3] registered
Jul 2 00:23:23.978472 kernel: acpiphp: Slot [4] registered
Jul 2 00:23:23.978483 kernel: acpiphp: Slot [5] registered
Jul 2 00:23:23.978495 kernel: acpiphp: Slot [6] registered
Jul 2 00:23:23.978518 kernel: acpiphp: Slot [7] registered
Jul 2 00:23:23.978539 kernel: acpiphp: Slot [8] registered
Jul 2 00:23:23.978550 kernel: acpiphp: Slot [9] registered
Jul 2 00:23:23.978562 kernel: acpiphp: Slot [10] registered
Jul 2 00:23:23.978573 kernel: acpiphp: Slot [11] registered
Jul 2 00:23:23.978585 kernel: acpiphp: Slot [12] registered
Jul 2 00:23:23.978597 kernel: acpiphp: Slot [13] registered
Jul 2 00:23:23.978608 kernel: acpiphp: Slot [14] registered
Jul 2 00:23:23.978619 kernel: acpiphp: Slot [15] registered
Jul 2 00:23:23.978638 kernel: acpiphp: Slot [16] registered
Jul 2 00:23:23.978655 kernel: acpiphp: Slot [17] registered
Jul 2 00:23:23.978666 kernel: acpiphp: Slot [18] registered
Jul 2 00:23:23.978678 kernel: acpiphp: Slot [19] registered
Jul 2 00:23:23.978689 kernel: acpiphp: Slot [20] registered
Jul 2 00:23:23.978700 kernel: acpiphp: Slot [21] registered
Jul 2 00:23:23.978712 kernel: acpiphp: Slot [22] registered
Jul 2 00:23:23.978723 kernel: acpiphp: Slot [23] registered
Jul 2 00:23:23.978745 kernel: acpiphp: Slot [24] registered
Jul 2 00:23:23.978756 kernel: acpiphp: Slot [25] registered
Jul 2 00:23:23.978789 kernel: acpiphp: Slot [26] registered
Jul 2 00:23:23.978802 kernel: acpiphp: Slot [27] registered
Jul 2 00:23:23.978825 kernel: acpiphp: Slot [28] registered
Jul 2 00:23:23.978838 kernel: acpiphp: Slot [29] registered
Jul 2 00:23:23.978850 kernel: acpiphp: Slot [30] registered
Jul 2 00:23:23.978861 kernel: acpiphp: Slot [31] registered
Jul 2 00:23:23.978872 kernel: PCI host bridge to bus 0000:00
Jul 2 00:23:23.979077 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:23:23.979264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:23:23.979449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:23:23.979649 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:23:23.979826 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Jul 2 00:23:23.979987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:23:23.980232 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:23:23.980437 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:23:23.980643 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:23:23.980834 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:23:23.981013 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:23:23.981208 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:23:23.981385 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:23:23.981582 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:23:23.981841 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:23:23.982030 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 00:23:23.982250 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 00:23:23.982452 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:23:23.982631 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 2 00:23:23.982838 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Jul 2 00:23:23.983019 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 2 00:23:23.983238 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Jul 2 00:23:23.983416 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:23:23.983611 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:23:23.983765 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:23:23.983914 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 2 00:23:23.984075 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 2 00:23:23.984303 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:23:23.984467 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:23:23.984625 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 2 00:23:23.984798 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 2 00:23:23.984974 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:23:23.985157 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:23:23.985317 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Jul 2 00:23:23.985473 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 2 00:23:23.985631 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 2 00:23:23.985647 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:23:23.985658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:23:23.985670 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:23:23.985681 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:23:23.985692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:23:23.985703 kernel: iommu: Default domain type: Translated
Jul 2 00:23:23.985714 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:23:23.985725 kernel: efivars: Registered efivars operations
Jul 2 00:23:23.985751 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:23:23.985761 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:23:23.985772 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 2 00:23:23.985783 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 2 00:23:23.985793 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 2 00:23:23.985804 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 2 00:23:23.985982 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:23:23.986167 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:23:23.986326 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:23:23.986348 kernel: vgaarb: loaded
Jul 2 00:23:23.986359 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:23:23.986370 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:23:23.986381 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:23:23.986391 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:23:23.986403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:23:23.986414 kernel: pnp: PnP ACPI init
Jul 2 00:23:23.986600 kernel: pnp 00:02: [dma 2]
Jul 2 00:23:23.986622 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:23:23.986634 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:23:23.986645 kernel: NET: Registered PF_INET protocol family
Jul 2 00:23:23.986656 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:23:23.986667 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:23:23.986678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:23:23.986689 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:23:23.986700 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:23:23.986711 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:23:23.986725 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:23:23.986748 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:23:23.986759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:23:23.986770 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:23:23.986929 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 2 00:23:23.987086 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 2 00:23:23.987372 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:23:23.987520 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:23:23.987672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:23:23.987827 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:23:23.987998 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Jul 2 00:23:23.988177 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:23:23.988380 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:23:23.988397 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:23:23.988408 kernel: Initialise system trusted keyrings
Jul 2 00:23:23.988419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:23:23.988436 kernel: Key type asymmetric registered
Jul 2 00:23:23.988446 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:23:23.988457 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:23:23.988468 kernel: io scheduler mq-deadline registered
Jul 2 00:23:23.988480 kernel: io scheduler kyber registered
Jul 2 00:23:23.988491 kernel: io scheduler bfq registered
Jul 2 00:23:23.988502 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:23:23.988513 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:23:23.988524 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:23:23.988538 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:23:23.988549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:23:23.988559 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:23:23.988570 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:23:23.988601 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:23:23.988615 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:23:23.988810 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:23:23.988959 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:23:23.988981 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:23:23.989204 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:23:23 UTC (1719879803)
Jul 2 00:23:23.989353 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:23:23.989369 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:23:23.989381 kernel: efifb: probing for efifb
Jul 2 00:23:23.989393 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 2 00:23:23.989404 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 2 00:23:23.989415 kernel: efifb: scrolling: redraw
Jul 2 00:23:23.989432 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 2 00:23:23.989443 kernel: Console: switching to colour frame buffer device 100x37
Jul 2 00:23:23.989454 kernel: fb0: EFI VGA frame buffer device
Jul 2 00:23:23.989465 kernel: pstore: Using crash dump compression: deflate
Jul 2 00:23:23.989477 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 2 00:23:23.989488 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:23:23.989499 kernel: Segment Routing with IPv6
Jul 2 00:23:23.989510 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:23:23.989522 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:23:23.989534 kernel: Key type dns_resolver registered
Jul 2 00:23:23.989549 kernel: IPI shorthand broadcast: enabled
Jul 2 00:23:23.989560 kernel: sched_clock: Marking stable (1172003386, 134983094)->(1350260425, -43273945)
Jul 2 00:23:23.989575 kernel: registered taskstats version 1
Jul 2 00:23:23.989586 kernel: Loading compiled-in X.509 certificates
Jul 2 00:23:23.989598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:23:23.989613 kernel: Key type .fscrypt registered
Jul 2 00:23:23.989625 kernel: Key type fscrypt-provisioning registered
Jul 2 00:23:23.989636 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:23:23.989647 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:23:23.989659 kernel: ima: No architecture policies found
Jul 2 00:23:23.989670 kernel: clk: Disabling unused clocks
Jul 2 00:23:23.989681 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:23:23.989693 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:23:23.989704 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:23:23.989719 kernel: Run /init as init process
Jul 2 00:23:23.989746 kernel: with arguments:
Jul 2 00:23:23.989757 kernel: /init
Jul 2 00:23:23.989769 kernel: with environment:
Jul 2 00:23:23.989781 kernel: HOME=/
Jul 2 00:23:23.989792 kernel: TERM=linux
Jul 2 00:23:23.989803 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:23:23.989820 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:23.989839 systemd[1]: Detected virtualization kvm.
Jul 2 00:23:23.989851 systemd[1]: Detected architecture x86-64.
Jul 2 00:23:23.989863 systemd[1]: Running in initrd.
Jul 2 00:23:23.989876 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:23:23.989887 systemd[1]: Hostname set to <linux>.
Jul 2 00:23:23.989899 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:23:23.989910 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:23:23.989922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:23.989936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:23.989949 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:23:23.989961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:23.989973 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:23:23.989985 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:23:23.989998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:23:23.990013 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:23:23.990025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:23.990037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:23.990048 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:23:23.990060 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:23.990071 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:23.990083 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:23:23.990095 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:23.990107 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:23.990140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:23:23.990152 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:23:23.990164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:23.990175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:23.990187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:23.990199 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:23:23.990210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:23:23.990223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:23.990239 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:23:23.990251 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:23:23.990263 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:23.990276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:23.990288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:23.990300 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:23.990312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:23.990324 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:23:23.990371 systemd-journald[190]: Collecting audit messages is disabled.
Jul 2 00:23:23.990413 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:23:23.990425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:23.990438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:23.990450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:23:23.990463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:23:23.990475 systemd-journald[190]: Journal started
Jul 2 00:23:23.990503 systemd-journald[190]: Runtime Journal (/run/log/journal/fdc69c709e3a4b1f8eab47f876bedb0c) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:23:23.976894 systemd-modules-load[191]: Inserted module 'overlay'
Jul 2 00:23:23.994241 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:24.004163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:23:24.006831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:24.011303 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:23:24.015145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:24.020204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:23:24.024160 kernel: Bridge firewalling registered
Jul 2 00:23:24.024021 systemd-modules-load[191]: Inserted module 'br_netfilter'
Jul 2 00:23:24.026025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:24.030278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:23:24.034683 dracut-cmdline[221]: dracut-dracut-053
Jul 2 00:23:24.038291 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:23:24.045409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:24.046211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:24.058585 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:24.092700 systemd-resolved[247]: Positive Trust Anchors:
Jul 2 00:23:24.092737 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:23:24.092775 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:23:24.096016 systemd-resolved[247]: Defaulting to hostname 'linux'.
Jul 2 00:23:24.097485 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:24.102904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:24.166146 kernel: SCSI subsystem initialized
Jul 2 00:23:24.177139 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:23:24.190136 kernel: iscsi: registered transport (tcp)
Jul 2 00:23:24.218164 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:23:24.218236 kernel: QLogic iSCSI HBA Driver
Jul 2 00:23:24.278772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:23:24.291435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:23:24.326635 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:23:24.326712 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:23:24.327898 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:23:24.381174 kernel: raid6: avx2x4 gen() 28318 MB/s
Jul 2 00:23:24.398163 kernel: raid6: avx2x2 gen() 28775 MB/s
Jul 2 00:23:24.415576 kernel: raid6: avx2x1 gen() 20333 MB/s
Jul 2 00:23:24.415677 kernel: raid6: using algorithm avx2x2 gen() 28775 MB/s
Jul 2 00:23:24.433400 kernel: raid6: .... xor() 19512 MB/s, rmw enabled
Jul 2 00:23:24.433495 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:23:24.461157 kernel: xor: automatically using best checksumming function avx
Jul 2 00:23:24.656147 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:23:24.671412 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:23:24.680508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:24.698671 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 2 00:23:24.704466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:24.710366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:23:24.729523 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jul 2 00:23:24.769299 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:23:24.791422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:23:24.870538 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:24.882368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:23:24.905131 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:23:24.909012 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:23:24.912675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:24.915791 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:23:24.918810 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:23:24.927138 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:23:24.952415 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:23:24.952624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:23:24.952638 kernel: GPT:9289727 != 19775487
Jul 2 00:23:24.952657 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:23:24.952668 kernel: GPT:9289727 != 19775487
Jul 2 00:23:24.952678 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:23:24.952688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:23:24.931351 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:23:24.945624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:23:24.945824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:24.958808 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:23:24.948099 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:24.961222 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:23:24.951394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:23:24.951609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:24.953279 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:24.963677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:24.964933 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:23:24.978914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:23:24.979042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:24.985494 kernel: libata version 3.00 loaded.
Jul 2 00:23:24.987165 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:23:25.005544 kernel: scsi host0: ata_piix
Jul 2 00:23:25.005765 kernel: scsi host1: ata_piix
Jul 2 00:23:25.005938 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465)
Jul 2 00:23:25.005951 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:23:25.005970 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:23:24.999265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:25.011169 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473)
Jul 2 00:23:25.018125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:23:25.020924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:25.031163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:23:25.046866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:23:25.055605 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:23:25.064090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:23:25.094284 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:23:25.095479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:25.125652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:25.159291 kernel: ata2: found unknown device (class 0)
Jul 2 00:23:25.159339 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:23:25.162194 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:23:25.217150 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:23:25.235082 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:23:25.235140 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:23:25.297004 disk-uuid[546]: Primary Header is updated.
Jul 2 00:23:25.297004 disk-uuid[546]: Secondary Entries is updated.
Jul 2 00:23:25.297004 disk-uuid[546]: Secondary Header is updated.
Jul 2 00:23:25.301218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:23:26.315143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:23:26.315802 disk-uuid[568]: The operation has completed successfully.
Jul 2 00:23:26.345362 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:23:26.345488 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:23:26.382281 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:23:26.405083 sh[585]: Success
Jul 2 00:23:26.453214 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:23:26.492437 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:23:26.519088 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:23:26.536940 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:23:26.554528 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:23:26.554574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:23:26.554586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:23:26.555782 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:23:26.556665 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:23:26.562496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:23:26.565184 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:23:26.576317 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:23:26.579389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:23:26.588360 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:26.588417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:23:26.588429 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:23:26.592135 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:23:26.603132 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:23:26.628285 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:26.689588 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:23:26.701463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:26.762933 systemd-networkd[763]: lo: Link UP
Jul 2 00:23:26.762945 systemd-networkd[763]: lo: Gained carrier
Jul 2 00:23:26.764731 systemd-networkd[763]: Enumeration completed
Jul 2 00:23:26.765154 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:26.765159 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:26.765248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:23:26.766148 systemd-networkd[763]: eth0: Link UP
Jul 2 00:23:26.766152 systemd-networkd[763]: eth0: Gained carrier
Jul 2 00:23:26.766160 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:26.767508 systemd[1]: Reached target network.target - Network.
Jul 2 00:23:26.799247 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:23:26.866654 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:23:26.871622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:23:26.935013 ignition[768]: Ignition 2.18.0
Jul 2 00:23:26.935025 ignition[768]: Stage: fetch-offline
Jul 2 00:23:26.935072 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:26.935084 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:23:26.935318 ignition[768]: parsed url from cmdline: ""
Jul 2 00:23:26.935325 ignition[768]: no config URL provided
Jul 2 00:23:26.935332 ignition[768]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:23:26.935342 ignition[768]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:23:26.935373 ignition[768]: op(1): [started] loading QEMU firmware config module
Jul 2 00:23:26.935378 ignition[768]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:23:26.960355 ignition[768]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:23:27.002526 ignition[768]: parsing config with SHA512: 3da000b1c2295c6cf31d6bee8ddc51c1d52e8753cb585fb62b0cb350c24adafbf9c31c336772e767a06e13fee71f835db8b164110e3321e24395c9dd2dbf062f
Jul 2 00:23:27.007489 unknown[768]: fetched base config from "system"
Jul 2 00:23:27.007504 unknown[768]: fetched user config from "qemu"
Jul 2 00:23:27.022147 ignition[768]: fetch-offline: fetch-offline passed
Jul 2 00:23:27.022327 ignition[768]: Ignition finished successfully
Jul 2 00:23:27.025723 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:27.053853 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:23:27.066530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:23:27.086395 ignition[780]: Ignition 2.18.0
Jul 2 00:23:27.086422 ignition[780]: Stage: kargs
Jul 2 00:23:27.086695 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:27.086713 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:23:27.087880 ignition[780]: kargs: kargs passed
Jul 2 00:23:27.087937 ignition[780]: Ignition finished successfully
Jul 2 00:23:27.095916 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:27.114502 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:23:27.131550 ignition[789]: Ignition 2.18.0
Jul 2 00:23:27.131562 ignition[789]: Stage: disks
Jul 2 00:23:27.131751 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:27.131764 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:23:27.132650 ignition[789]: disks: disks passed
Jul 2 00:23:27.132710 ignition[789]: Ignition finished successfully
Jul 2 00:23:27.150673 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:23:27.151271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:23:27.151630 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:23:27.152027 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:27.152634 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:23:27.152993 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:23:27.163811 systemd-resolved[247]: Detected conflict on linux IN A 10.0.0.122
Jul 2 00:23:27.163827 systemd-resolved[247]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jul 2 00:23:27.168072 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:23:27.200863 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:23:27.463094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:23:27.474418 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:23:27.613179 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:23:27.614324 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:23:27.615346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:23:27.629271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:23:27.631426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:23:27.632359 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:23:27.632404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:23:27.641167 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Jul 2 00:23:27.641195 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:27.632432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:23:27.662839 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:23:27.662868 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:23:27.662879 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:23:27.664992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:23:27.670090 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:23:27.683270 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:23:27.722254 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:23:27.728100 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:23:27.732661 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:23:27.737359 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:23:27.833858 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:23:27.850246 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:23:27.852399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:27.860036 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:23:27.861516 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:27.890175 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:23:27.917968 ignition[927]: INFO : Ignition 2.18.0
Jul 2 00:23:27.917968 ignition[927]: INFO : Stage: mount
Jul 2 00:23:27.923146 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:27.923146 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:23:27.923146 ignition[927]: INFO : mount: mount passed
Jul 2 00:23:27.923146 ignition[927]: INFO : Ignition finished successfully
Jul 2 00:23:27.928812 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:23:27.941210 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:23:28.029478 systemd-networkd[763]: eth0: Gained IPv6LL Jul 2 00:23:28.628454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:28.638197 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jul 2 00:23:28.640782 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:28.640844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:28.640861 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:23:28.645136 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:23:28.646700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:23:28.685062 ignition[955]: INFO : Ignition 2.18.0 Jul 2 00:23:28.685062 ignition[955]: INFO : Stage: files Jul 2 00:23:28.687297 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:28.687297 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:23:28.687297 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:23:28.691323 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:23:28.691323 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:23:28.691323 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:23:28.691323 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:23:28.698250 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:23:28.698250 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:23:28.698250 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:23:28.691773 unknown[955]: wrote ssh authorized keys file for user: core Jul 2 00:23:28.783658 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:23:28.940902 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:28.943558 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 00:23:29.461035 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:23:29.879006 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:29.879006 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 2 00:23:29.901226 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 00:23:29.928995 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:23:29.937476 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:23:29.966011 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:23:29.966011 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:29.966011 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:29.966011 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:29.966011 ignition[955]: INFO : files: 
createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:29.966011 ignition[955]: INFO : files: files passed Jul 2 00:23:29.966011 ignition[955]: INFO : Ignition finished successfully Jul 2 00:23:29.942386 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:23:29.978467 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:23:29.980925 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:23:29.983182 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:23:29.983334 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:23:29.993237 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 00:23:29.996098 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:29.996098 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:30.008280 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:30.012357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:30.013151 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:23:30.032354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:23:30.077945 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:23:30.078133 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:23:30.079057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:23:30.082409 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:23:30.082808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:23:30.083783 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:23:30.110856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:30.123526 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:23:30.139287 systemd[1]: Stopped target network.target - Network. Jul 2 00:23:30.139974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:30.141903 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:30.142545 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:23:30.142894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:23:30.143058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:30.150925 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:23:30.151752 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:23:30.152135 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:23:30.153163 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:30.154002 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:30.154616 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
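[Editor's note] The files stage above wrote payloads, systemd units, and preset symlinks under /sysroot. A sketch of the shape of an Ignition v3 config that would drive those operations; the unit names are taken from the log, but the journal does not record the config bodies, so the contents below are placeholders:

```python
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        # The log only shows that SSH keys were added for "core"; key is a placeholder.
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA..."]}]
    },
    "storage": {
        # Placeholder body; the real update.conf contents are not logged.
        "files": [{"path": "/etc/flatcar/update.conf",
                   "contents": {"source": "data:,GROUP=stable"}}]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}
print(json.dumps(config, indent=2))
```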
Jul 2 00:23:30.155025 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:30.155451 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:23:30.155858 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:23:30.156413 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:23:30.156718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:23:30.156908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:30.177691 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:30.180079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:30.181869 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:23:30.184241 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:30.184962 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:23:30.185173 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:23:30.190403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:23:30.190581 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:23:30.191182 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:23:30.191630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:23:30.195825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:30.198639 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:23:30.200552 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:23:30.202879 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:23:30.203028 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:23:30.205952 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:23:30.206273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:23:30.207568 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:23:30.207779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:30.209937 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:23:30.210100 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:23:30.225322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:23:30.226045 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:23:30.226265 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:30.228665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:23:30.234243 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:23:30.234966 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:23:30.236581 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:23:30.236845 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:30.240276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:23:30.240445 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 00:23:30.244053 systemd-networkd[763]: eth0: DHCPv6 lease lost Jul 2 00:23:30.245494 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:23:30.245713 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:23:30.251487 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:23:30.251699 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:23:30.254051 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:23:30.255422 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:23:30.257791 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:23:30.257896 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:30.260689 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:23:30.263814 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:23:30.263915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:30.264443 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:23:30.264511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:30.268128 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:23:30.268192 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:30.268685 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:23:30.268751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:30.270333 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:23:30.282233 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:23:30.282386 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:23:30.304453 ignition[1009]: INFO : Ignition 2.18.0 Jul 2 00:23:30.304453 ignition[1009]: INFO : Stage: umount Jul 2 00:23:30.306542 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:30.306542 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:23:30.309581 ignition[1009]: INFO : umount: umount passed Jul 2 00:23:30.311664 ignition[1009]: INFO : Ignition finished successfully Jul 2 00:23:30.314908 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:23:30.315148 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:23:30.316687 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:23:30.316827 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:23:30.319083 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:23:30.319209 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:23:30.321551 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:23:30.321672 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:23:30.323514 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:23:30.323575 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:23:30.324233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:30.343548 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:23:30.343820 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
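[Editor's note] The umount stage above is the last of Ignition's stages in this journal; each stage logs "Stage: <name>" followed by "Ignition finished successfully". A sketch that scans a journal dump and recovers the stage sequence, under the assumption that the log text is available as one string:

```python
import re

EXPECTED = ["fetch-offline", "kargs", "disks", "mount", "files", "umount"]

def ignition_stages(journal_text: str) -> list[str]:
    """Pull Ignition stage names out of a journal dump, in order of appearance."""
    return re.findall(r"Stage: (\S+)", journal_text)

sample = "ignition[768]: Stage: fetch-offline ... ignition[1009]: INFO : Stage: umount"
print(ignition_stages(sample))  # ['fetch-offline', 'umount']
# Run against the full journal above, the result should equal EXPECTED.
```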
Jul 2 00:23:30.346652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:23:30.346707 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:30.349352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:23:30.349401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:30.351835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:23:30.351903 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:30.354882 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:23:30.354982 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:30.357028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:30.357083 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:30.414447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:23:30.415878 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:23:30.416007 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:30.418712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:30.418784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:30.424676 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:23:30.424807 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:23:30.568187 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:23:30.570270 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:23:30.573394 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:23:30.598725 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:23:30.599784 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:30.616388 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:23:30.649939 systemd[1]: Switching root. Jul 2 00:23:30.684266 systemd-journald[190]: Journal stopped Jul 2 00:23:33.633706 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Jul 2 00:23:33.633787 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:23:33.633801 kernel: SELinux: policy capability open_perms=1 Jul 2 00:23:33.633813 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:23:33.633825 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:23:33.633837 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:23:33.633849 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:23:33.633865 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:23:33.633877 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:23:33.633894 kernel: audit: type=1403 audit(1719879812.243:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:23:33.633913 systemd[1]: Successfully loaded SELinux policy in 118.972ms. Jul 2 00:23:33.633939 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.016ms. 
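[Editor's note] After the switch-root, both the SELinux policy load and the initial relabel are timed in the log ("118.972ms" and "14.016ms" above). A trivial sketch extracting those durations:

```python
import re

log = ("Successfully loaded SELinux policy in 118.972ms. "
       "Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.016ms.")
times = {m.group(1): float(m.group(2))
         for m in re.finditer(r"(loaded SELinux policy|Relabeled .*?) in ([\d.]+)ms", log)}
print(times)
# {'loaded SELinux policy': 118.972,
#  'Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup': 14.016}
```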
Jul 2 00:23:33.633953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:23:33.633966 systemd[1]: Detected virtualization kvm. Jul 2 00:23:33.633978 systemd[1]: Detected architecture x86-64. Jul 2 00:23:33.633990 systemd[1]: Detected first boot. Jul 2 00:23:33.634005 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:23:33.634018 zram_generator::config[1054]: No configuration found. Jul 2 00:23:33.634032 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:23:33.634044 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:23:33.634057 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 00:23:33.634072 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:23:33.634085 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:23:33.634097 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:23:33.634131 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:23:33.634144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:23:33.634157 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:23:33.634169 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:23:33.634182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:23:33.634195 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:23:33.634208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:33.634221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:33.634234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:23:33.634251 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:23:33.634264 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:23:33.634276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:23:33.634289 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 00:23:33.634301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:33.634314 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 00:23:33.634326 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:23:33.634339 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:33.634354 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:23:33.634373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:33.634389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:33.634402 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:23:33.634414 systemd[1]: Reached target swap.target - Swaps. 
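[Editor's note] The "systemd 255 running in system mode (...)" banner above lists compile-time options: "+" means built in, "-" means omitted. A sketch splitting the flags from this exact log line:

```python
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT").split()
enabled  = {f[1:] for f in features if f.startswith("+")}
disabled = {f[1:] for f in features if f.startswith("-")}
print(sorted(disabled))
# ['ACL', 'APPARMOR', 'BPF_FRAMEWORK', 'FIDO2', 'GNUTLS', 'IDN',
#  'P11KIT', 'PWQUALITY', 'QRENCODE', 'SYSVINIT', 'XKBCOMMON']
```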
Jul 2 00:23:33.634427 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:23:33.634440 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:23:33.634452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:33.634468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:33.634481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:33.634493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:23:33.634513 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:23:33.634526 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:23:33.634538 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:23:33.634551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:33.634563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:23:33.634576 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:23:33.634591 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:23:33.634605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:23:33.634617 systemd[1]: Reached target machines.target - Containers. Jul 2 00:23:33.634629 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:23:33.634642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:33.634656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:23:33.634668 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:23:33.634681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:33.634696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:33.634708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:33.634720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:23:33.634733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:33.634746 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:23:33.634758 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:23:33.634771 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:23:33.634783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:23:33.634795 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:23:33.634810 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:23:33.634823 kernel: loop: module loaded Jul 2 00:23:33.634834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:23:33.634846 kernel: fuse: init (API version 7.39) Jul 2 00:23:33.634858 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
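[Editor's note] The modprobe@ units starting above are all instances of a single template unit, modprobe@.service, with the module name carried as the instance (%i). A sketch splitting the instance names seen in this log:

```python
units = ["modprobe@configfs.service", "modprobe@dm_mod.service",
         "modprobe@drm.service", "modprobe@efi_pstore.service",
         "modprobe@fuse.service", "modprobe@loop.service"]
for u in units:
    template, instance = u.removesuffix(".service").split("@")
    print(f"{template}@.service with instance %i = {instance}")
```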
Jul 2 00:23:33.634893 systemd-journald[1116]: Collecting audit messages is disabled. Jul 2 00:23:33.634915 systemd-journald[1116]: Journal started Jul 2 00:23:33.634940 systemd-journald[1116]: Runtime Journal (/run/log/journal/fdc69c709e3a4b1f8eab47f876bedb0c) is 6.0M, max 48.3M, 42.3M free. Jul 2 00:23:32.968398 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:23:33.637570 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:23:32.987774 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 00:23:32.988310 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:23:33.733153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:33.733364 kernel: ACPI: bus type drm_connector registered Jul 2 00:23:33.735397 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:23:33.735531 systemd[1]: Stopped verity-setup.service. Jul 2 00:23:33.739146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:33.743285 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:23:33.744668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:23:33.746176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:23:33.747596 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:23:33.748996 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:23:33.750377 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:23:33.832306 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:23:33.833721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:33.835473 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:23:33.835695 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:23:33.837453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:33.837658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:33.839413 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:33.839600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:33.841077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:33.841474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:33.843219 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:23:33.843446 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:23:33.845280 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:33.845494 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:33.847016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:33.848807 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:23:33.850733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:23:33.865030 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:23:33.879263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
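[Editor's note] journald sizes its runtime journal against a cap; in the line above, used (6.0M) plus free (42.3M) equals the max (48.3M). A sketch parsing that report:

```python
import re

line = ("Runtime Journal (/run/log/journal/fdc69c709e3a4b1f8eab47f876bedb0c) "
        "is 6.0M, max 48.3M, 42.3M free.")
used, cap, free = map(float, re.search(
    r"is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free", line).groups())
assert abs(used + free - cap) < 1e-9   # 6.0 + 42.3 == 48.3
print(f"{used / cap:.0%} of the runtime journal cap in use")  # 12%
```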
Jul 2 00:23:33.941068 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:23:33.942446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:23:33.942487 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:33.944707 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:23:33.947265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:23:33.949548 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:23:33.950841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:34.055358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:23:34.058128 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:23:34.059378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:34.065138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:23:34.066538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:34.074659 systemd-journald[1116]: Time spent on flushing to /var/log/journal/fdc69c709e3a4b1f8eab47f876bedb0c is 17.249ms for 986 entries. Jul 2 00:23:34.074659 systemd-journald[1116]: System Journal (/var/log/journal/fdc69c709e3a4b1f8eab47f876bedb0c) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:23:35.222688 systemd-journald[1116]: Received client request to flush runtime journal. Jul 2 00:23:35.222762 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 00:23:35.222794 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:23:35.222928 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:23:35.222949 kernel: loop1: detected capacity change from 0 to 139904 Jul 2 00:23:35.222969 kernel: loop2: detected capacity change from 0 to 80568 Jul 2 00:23:35.222996 kernel: loop3: detected capacity change from 0 to 210664 Jul 2 00:23:35.223017 kernel: loop4: detected capacity change from 0 to 139904 Jul 2 00:23:35.223037 kernel: loop5: detected capacity change from 0 to 80568 Jul 2 00:23:34.075241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:23:34.081746 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:23:34.088429 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:23:34.089893 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:23:34.091662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:23:34.122189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:34.144266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:34.228552 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:23:34.240079 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
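[Editor's note] The journald flush report embedded above ("17.249ms for 986 entries") works out to roughly 17.5 microseconds per entry. The arithmetic:

```python
flush_ms, entries = 17.249, 986
per_entry_us = flush_ms / entries * 1000
print(f"{per_entry_us:.1f} us per flushed journal entry")  # 17.5 us
```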
Jul 2 00:23:34.641471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:23:34.695440 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:23:34.707318 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:23:35.062742 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:23:35.113716 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 00:23:35.114462 (sd-merge)[1180]: Merged extensions into '/usr'. Jul 2 00:23:35.122458 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:23:35.207327 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:23:35.207339 systemd[1]: Reloading... Jul 2 00:23:35.283047 zram_generator::config[1210]: No configuration found. Jul 2 00:23:35.353043 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:23:35.456382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:35.522440 systemd[1]: Reloading finished in 314 ms. Jul 2 00:23:35.570064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:23:35.573099 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:23:35.575265 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:23:35.577302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:23:35.579308 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:23:35.581260 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:23:35.602610 systemd[1]: Starting ensure-sysext.service... Jul 2 00:23:35.605955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:35.610368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:35.619089 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:23:35.619105 systemd[1]: Reloading... Jul 2 00:23:35.658586 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jul 2 00:23:35.659088 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jul 2 00:23:35.671061 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:23:35.671479 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:23:35.672759 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:23:35.673186 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 2 00:23:35.673286 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 2 00:23:35.677615 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:23:35.677634 systemd-tmpfiles[1251]: Skipping /boot Jul 2 00:23:35.696641 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 2 00:23:35.696664 systemd-tmpfiles[1251]: Skipping /boot Jul 2 00:23:35.699129 zram_generator::config[1278]: No configuration found. Jul 2 00:23:35.812128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:35.862118 systemd[1]: Reloading finished in 242 ms. Jul 2 00:23:35.896096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:35.969149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:35.988637 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:36.024142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:23:36.026942 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:23:36.035330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:36.095696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:23:36.100716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.100949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:36.102602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:36.138296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:36.141202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:36.142737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:36.145383 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:23:36.146791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.148301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:36.148563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:36.150509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:36.150698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:36.152647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:36.152824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:36.161721 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.163040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:36.175519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:36.185067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:36.190704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:36.192165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
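[Editor's note] The "(sd-merge)" lines above show systemd-sysext overlaying three extension images onto /usr, which is what triggered the daemon reload. A sketch recovering the extension names from that log line:

```python
import re

line = ("(sd-merge)[1180]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes'.")
extensions = re.findall(r"'([^']+)'", line)
print(extensions)  # ['containerd-flatcar', 'docker-flatcar', 'kubernetes']
```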
Jul 2 00:23:36.192322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.193778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:23:36.196280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:23:36.198586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:36.201381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:36.203242 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:23:36.205547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:36.205794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:36.208514 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:36.208877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:36.212833 augenrules[1351]: No rules Jul 2 00:23:36.214661 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:36.227296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.227580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:36.237426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:36.243695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:36.246032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:36.251016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:36.251625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:36.251763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:36.253335 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:23:36.255194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:36.255376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:36.257090 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:36.257374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:36.258918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:36.259103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:36.261521 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:36.261743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:36.266772 systemd[1]: Finished ensure-sysext.service. Jul 2 00:23:36.275418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:36.275541 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:36.289460 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 2 00:23:36.290817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:23:36.295176 systemd-resolved[1325]: Positive Trust Anchors: Jul 2 00:23:36.295196 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:36.295226 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:36.299344 systemd-resolved[1325]: Defaulting to hostname 'linux'. Jul 2 00:23:36.301599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:36.302990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:36.352899 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:23:36.371613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:36.374976 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:23:36.376738 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:23:36.378620 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:23:36.402679 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:23:36.406355 systemd-udevd[1375]: Using default interface naming scheme 'v255'. Jul 2 00:23:36.433028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:36.447430 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:36.479148 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1387) Jul 2 00:23:36.488763 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 00:23:36.510145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1395) Jul 2 00:23:36.523451 systemd-networkd[1386]: lo: Link UP Jul 2 00:23:36.523464 systemd-networkd[1386]: lo: Gained carrier Jul 2 00:23:36.525612 systemd-networkd[1386]: Enumeration completed Jul 2 00:23:36.525736 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:36.526416 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:36.526430 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:23:36.527725 systemd-networkd[1386]: eth0: Link UP Jul 2 00:23:36.527738 systemd-networkd[1386]: eth0: Gained carrier Jul 2 00:23:36.527755 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:36.553830 systemd[1]: Reached target network.target - Network. 
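[Editor's note] The resolved "Positive Trust Anchors" entry above is the DNSSEC trust anchor for the root zone: key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A sketch splitting the DS record into its fields:

```python
record = (". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409"
          "bbc683457104237c7f8ec8d")
owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
print(key_tag, algorithm, digest_type)  # 20326 8 2  (root KSK-2017)
```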
Jul 2 00:23:36.560499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:23:36.571236 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:23:36.574159 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 00:23:36.574259 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 00:23:37.297388 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:23:36.578242 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Jul 2 00:23:36.579861 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:37.297266 systemd-resolved[1325]: Clock change detected. Flushing caches. Jul 2 00:23:37.297507 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:23:37.297632 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2024-07-02 00:23:37.297149 UTC. Jul 2 00:23:37.302793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:23:37.309897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 00:23:37.311147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:23:37.346398 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:23:37.374891 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:23:37.376260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:37.394126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:37.394475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:37.400143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:37.510591 kernel: kvm_amd: TSC scaling supported Jul 2 00:23:37.510732 kernel: kvm_amd: Nested Virtualization enabled Jul 2 00:23:37.510782 kernel: kvm_amd: Nested Paging enabled Jul 2 00:23:37.512060 kernel: kvm_amd: LBR virtualization supported Jul 2 00:23:37.512120 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 2 00:23:37.512927 kernel: kvm_amd: Virtual GIF supported Jul 2 00:23:37.548901 kernel: EDAC MC: Ver: 3.0.0 Jul 2 00:23:37.560114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:37.588627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:23:37.604082 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:23:37.616669 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:37.653889 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:23:37.671483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:37.672743 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:37.674073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:23:37.675557 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
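[Editor's note] eth0's DHCPv4 lease above ("10.0.0.122/16, gateway 10.0.0.1") can be sanity-checked with the standard-library ipaddress module; note also that systemd-timesyncd stepped the clock here, which is why the timestamps jump from 00:23:36.58 to 00:23:37.29 and resolved flushed its caches.

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.122/16")
print(iface.network)                                      # 10.0.0.0/16
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True
```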
Jul 2 00:23:37.677219 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:23:37.678561 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:23:37.679929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:23:37.681327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:23:37.681359 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:37.682405 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:37.684612 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:23:37.688747 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:23:37.699573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:23:37.702608 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:23:37.704512 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:23:37.705814 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:37.707232 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:37.708531 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:37.708567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:37.710259 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:23:37.712920 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:23:37.716083 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:37.719040 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:23:37.725081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:23:37.726332 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:23:37.729874 jq[1432]: false Jul 2 00:23:37.730063 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:23:37.733129 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:23:37.736076 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:23:37.739026 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:23:37.746970 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:23:37.748821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:23:37.749633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:23:37.750577 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:23:37.752826 dbus-daemon[1431]: [system] SELinux support is enabled Jul 2 00:23:37.753291 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:23:37.755279 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:23:37.758519 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 2 00:23:37.759415 extend-filesystems[1433]: Found loop3 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found loop4 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found loop5 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found sr0 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda1 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda2 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda3 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found usr Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda4 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda6 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda7 Jul 2 00:23:37.764451 extend-filesystems[1433]: Found vda9 Jul 2 00:23:37.764451 extend-filesystems[1433]: Checking size of /dev/vda9 Jul 2 00:23:37.803495 extend-filesystems[1433]: Resized partition /dev/vda9 Jul 2 00:23:37.768279 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:23:37.805348 update_engine[1442]: I0702 00:23:37.791733 1442 main.cc:92] Flatcar Update Engine starting Jul 2 00:23:37.805348 update_engine[1442]: I0702 00:23:37.793165 1442 update_check_scheduler.cc:74] Next update check in 3m46s Jul 2 00:23:37.807013 extend-filesystems[1459]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:23:37.811960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1383) Jul 2 00:23:37.769898 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:23:37.813035 jq[1444]: true Jul 2 00:23:37.773773 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:23:37.774133 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:23:37.780278 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:23:37.780609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:23:37.815745 jq[1453]: true Jul 2 00:23:37.827330 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:23:37.829565 tar[1450]: linux-amd64/helm Jul 2 00:23:37.832414 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:23:37.834615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:23:37.834652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:23:37.838599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:23:37.838649 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:23:37.846872 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:23:37.851262 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:23:37.896890 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:23:37.896927 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:23:37.897387 systemd-logind[1441]: New seat seat0. 
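[Editor's note] The kernel line above shows the root filesystem being grown online from 553472 to 1864699 blocks; at the 4 KiB block size resize2fs reports just below, that is roughly 2.1 GiB growing to 7.1 GiB. The arithmetic:

```python
BLOCK = 4096  # "(4k) blocks" per the resize2fs output below
old_blocks, new_blocks = 553_472, 1_864_699
gib = lambda b: b * BLOCK / 2**30
print(f"{gib(old_blocks):.2f} GiB -> {gib(new_blocks):.2f} GiB")
# 2.11 GiB -> 7.11 GiB
```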
Jul 2 00:23:37.902195 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:23:37.998888 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:23:38.063940 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:23:38.098880 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:23:38.100744 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:23:38.111455 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:23:38.138416 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:23:38.138727 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:23:38.147535 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:23:38.217351 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:23:38.232358 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:23:38.232358 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:23:38.232358 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:23:38.228012 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:23:38.238440 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jul 2 00:23:38.234359 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:23:38.235484 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:23:38.242840 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:23:38.243139 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:23:38.259694 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:23:38.261044 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:23:38.267430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:23:38.473057 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:23:38.483660 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990). Jul 2 00:23:38.551007 sshd[1516]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:38.554279 sshd[1516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:38.563346 containerd[1458]: time="2024-07-02T00:23:38.563058151Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:23:38.565316 tar[1450]: linux-amd64/LICENSE Jul 2 00:23:38.565316 tar[1450]: linux-amd64/README.md Jul 2 00:23:38.583538 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:23:38.584637 systemd-logind[1441]: New session 1 of user core. Jul 2 00:23:38.586678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:23:38.588667 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:23:38.595392 containerd[1458]: time="2024-07-02T00:23:38.595320237Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:23:38.595392 containerd[1458]: time="2024-07-02T00:23:38.595389177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:23:38.597993 containerd[1458]: time="2024-07-02T00:23:38.597827461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:38.597993 containerd[1458]: time="2024-07-02T00:23:38.597941445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.598428 containerd[1458]: time="2024-07-02T00:23:38.598381681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:38.598428 containerd[1458]: time="2024-07-02T00:23:38.598415454Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:23:38.598598 containerd[1458]: time="2024-07-02T00:23:38.598570936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.598706 containerd[1458]: time="2024-07-02T00:23:38.598667838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:38.598706 containerd[1458]: time="2024-07-02T00:23:38.598695720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.598827 containerd[1458]: time="2024-07-02T00:23:38.598801829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.599200 containerd[1458]: time="2024-07-02T00:23:38.599141055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.599200 containerd[1458]: time="2024-07-02T00:23:38.599189115Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:23:38.599271 containerd[1458]: time="2024-07-02T00:23:38.599210305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:38.599436 containerd[1458]: time="2024-07-02T00:23:38.599394902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:38.599436 containerd[1458]: time="2024-07-02T00:23:38.599421071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:23:38.599532 containerd[1458]: time="2024-07-02T00:23:38.599508495Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:23:38.599567 containerd[1458]: time="2024-07-02T00:23:38.599533031Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606264840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606319172Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606340412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606385727Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606406636Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606422726Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:23:38.606487 containerd[1458]: time="2024-07-02T00:23:38.606439317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606629875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606652397Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606668758Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606686481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606705687Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606726817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606742526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606754929Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606776600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606789715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606803170Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.606837 containerd[1458]: time="2024-07-02T00:23:38.606818208Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:23:38.607267 containerd[1458]: time="2024-07-02T00:23:38.606959393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 2 00:23:38.607267 containerd[1458]: time="2024-07-02T00:23:38.607223479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:23:38.607267 containerd[1458]: time="2024-07-02T00:23:38.607249818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607267 containerd[1458]: time="2024-07-02T00:23:38.607264175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:23:38.607388 containerd[1458]: time="2024-07-02T00:23:38.607288110Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:23:38.607388 containerd[1458]: time="2024-07-02T00:23:38.607343674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607388 containerd[1458]: time="2024-07-02T00:23:38.607356538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607388 containerd[1458]: time="2024-07-02T00:23:38.607368891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607388 containerd[1458]: time="2024-07-02T00:23:38.607380042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607398908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607413645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607426670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607439494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607451667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:23:38.607609 containerd[1458]: time="2024-07-02T00:23:38.607605175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607619862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607631905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607644308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607656932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607670497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607681628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:38.607792 containerd[1458]: time="2024-07-02T00:23:38.607692609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:23:38.608013 containerd[1458]: time="2024-07-02T00:23:38.607966593Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:23:38.608262 containerd[1458]: time="2024-07-02T00:23:38.608018951Z" level=info msg="Connect containerd service" Jul 2 00:23:38.608262 containerd[1458]: time="2024-07-02T00:23:38.608049298Z" level=info msg="using legacy CRI server" Jul 2 00:23:38.608262 containerd[1458]: time="2024-07-02T00:23:38.608055540Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:23:38.608262 containerd[1458]: time="2024-07-02T00:23:38.608131091Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:23:38.608830 containerd[1458]: time="2024-07-02T00:23:38.608778646Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:38.608814 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:23:38.609005 containerd[1458]: time="2024-07-02T00:23:38.608840773Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:23:38.609005 containerd[1458]: time="2024-07-02T00:23:38.608873234Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:23:38.609005 containerd[1458]: time="2024-07-02T00:23:38.608884044Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:23:38.609005 containerd[1458]: time="2024-07-02T00:23:38.608896197Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:23:38.609005 containerd[1458]: time="2024-07-02T00:23:38.608923618Z" level=info msg="Start subscribing containerd event" Jul 2 00:23:38.609134 containerd[1458]: time="2024-07-02T00:23:38.609033364Z" level=info msg="Start recovering state" Jul 2 00:23:38.609189 containerd[1458]: time="2024-07-02T00:23:38.609148700Z" level=info msg="Start event monitor" Jul 2 00:23:38.609212 containerd[1458]: time="2024-07-02T00:23:38.609189146Z" level=info msg="Start snapshots syncer" Jul 2 00:23:38.609212 containerd[1458]: time="2024-07-02T00:23:38.609204766Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:23:38.609249 containerd[1458]: time="2024-07-02T00:23:38.609217830Z" level=info msg="Start streaming server" Jul 2 00:23:38.610514 containerd[1458]: time="2024-07-02T00:23:38.610472093Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:23:38.610624 containerd[1458]: time="2024-07-02T00:23:38.610536785Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:23:38.610624 containerd[1458]: time="2024-07-02T00:23:38.610597518Z" level=info msg="containerd successfully booted in 0.055498s" Jul 2 00:23:38.610653 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:23:38.624453 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:23:38.629896 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:38.758727 systemd[1526]: Queued start job for default target default.target. Jul 2 00:23:38.767553 systemd[1526]: Created slice app.slice - User Application Slice. Jul 2 00:23:38.767584 systemd[1526]: Reached target paths.target - Paths. Jul 2 00:23:38.767599 systemd[1526]: Reached target timers.target - Timers. Jul 2 00:23:38.769481 systemd[1526]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:23:38.783972 systemd[1526]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:23:38.784120 systemd[1526]: Reached target sockets.target - Sockets. Jul 2 00:23:38.784141 systemd[1526]: Reached target basic.target - Basic System. Jul 2 00:23:38.784194 systemd[1526]: Reached target default.target - Main User Target. Jul 2 00:23:38.784230 systemd[1526]: Startup finished in 144ms. Jul 2 00:23:38.784937 systemd[1]: Started user@500.service - User Manager for UID 500. 
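containerd reports "failed to load cni during init" because /etc/cni/net.d is empty; that is expected on a node that has not joined a cluster yet, since the CNI provider normally drops its own conflist there later, and the "cni network conf syncer" started above picks it up when it appears. Purely for illustration, a hypothetical minimal bridge conflist of the kind the syncer watches for (the file name and subnet are invented, and the standard plugins are assumed to exist under /opt/cni/bin as configured above):

    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        }
      ]
    }
    EOF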
Jul 2 00:23:38.788078 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:23:38.852850 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004). Jul 2 00:23:38.891850 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:38.893641 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:38.898802 systemd-logind[1441]: New session 2 of user core. Jul 2 00:23:38.909159 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:23:38.970060 sshd[1537]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:38.978395 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:59004.service: Deactivated successfully. Jul 2 00:23:38.980816 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:23:38.982951 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:23:38.993360 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:59020.service - OpenSSH per-connection server daemon (10.0.0.1:59020). Jul 2 00:23:38.996213 systemd-logind[1441]: Removed session 2. Jul 2 00:23:39.030950 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 59020 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:39.032845 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:39.037545 systemd-logind[1441]: New session 3 of user core. Jul 2 00:23:39.048221 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:23:39.102489 systemd-networkd[1386]: eth0: Gained IPv6LL Jul 2 00:23:39.106644 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:23:39.109035 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:23:39.111047 sshd[1544]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:39.121267 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:23:39.124003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:39.126462 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:23:39.128120 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:59020.service: Deactivated successfully. Jul 2 00:23:39.131076 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:23:39.133586 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:23:39.137342 systemd-logind[1441]: Removed session 3. Jul 2 00:23:39.152541 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:23:39.152845 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:23:39.154630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:23:39.159305 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:23:40.189684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:40.191959 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:23:40.194568 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:23:40.195971 systemd[1]: Startup finished in 1.321s (kernel) + 8.480s (initrd) + 7.320s (userspace) = 17.123s. 
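The boot summary just logged ("1.321s (kernel) + 8.480s (initrd) + 7.320s (userspace) = 17.123s") can be decomposed after the fact with systemd-analyze; a quick sketch:

    # Reproduce the summary line
    systemd-analyze time

    # Units ranked by initialization time
    systemd-analyze blame | head -20

    # The dependency chain that gated multi-user.target
    systemd-analyze critical-chain multi-user.target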
Jul 2 00:23:40.982659 kubelet[1573]: E0702 00:23:40.982551 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:23:40.988020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:23:40.988307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:23:40.988774 systemd[1]: kubelet.service: Consumed 1.695s CPU time. Jul 2 00:23:49.124237 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:57264.service - OpenSSH per-connection server daemon (10.0.0.1:57264). Jul 2 00:23:49.162811 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 57264 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:49.164765 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:49.169136 systemd-logind[1441]: New session 4 of user core. Jul 2 00:23:49.184058 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:23:49.240667 sshd[1587]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:49.256618 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:57264.service: Deactivated successfully. Jul 2 00:23:49.258841 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:23:49.260984 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:23:49.272283 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:57266.service - OpenSSH per-connection server daemon (10.0.0.1:57266). Jul 2 00:23:49.273384 systemd-logind[1441]: Removed session 4. Jul 2 00:23:49.304462 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 57266 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:49.306430 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:49.311245 systemd-logind[1441]: New session 5 of user core. Jul 2 00:23:49.325229 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:23:49.378142 sshd[1594]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:49.388054 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:57266.service: Deactivated successfully. Jul 2 00:23:49.390721 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:23:49.392480 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:23:49.394201 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:57274.service - OpenSSH per-connection server daemon (10.0.0.1:57274). Jul 2 00:23:49.395227 systemd-logind[1441]: Removed session 5. Jul 2 00:23:49.431930 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 57274 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:49.433533 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:49.438057 systemd-logind[1441]: New session 6 of user core. Jul 2 00:23:49.449056 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:23:49.508441 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:49.520056 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:57274.service: Deactivated successfully. Jul 2 00:23:49.522177 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:23:49.523810 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. 
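The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist. That file is normally written during node bootstrap (by kubeadm init or kubeadm join), so this crash, and the identical ones at every scheduled restart below, are expected on a machine that has not yet been joined to a cluster. A sketch of confirming and resolving it, with kubeadm assumed to be the intended bootstrapper and the token/hash shown as placeholders:

    # The file named in the error is simply absent
    ls -l /var/lib/kubelet/config.yaml

    # Node bootstrap writes it, e.g.
    kubeadm join 10.0.0.122:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>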
Jul 2 00:23:49.532197 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:57288.service - OpenSSH per-connection server daemon (10.0.0.1:57288). Jul 2 00:23:49.533440 systemd-logind[1441]: Removed session 6. Jul 2 00:23:49.563851 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 57288 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:49.565718 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:49.571062 systemd-logind[1441]: New session 7 of user core. Jul 2 00:23:49.588171 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:23:49.648233 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:23:49.648595 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:49.994762 sudo[1612]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:49.997646 sshd[1609]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:50.010546 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:57288.service: Deactivated successfully. Jul 2 00:23:50.012686 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:23:50.014660 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:23:50.015918 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:57292.service - OpenSSH per-connection server daemon (10.0.0.1:57292). Jul 2 00:23:50.016818 systemd-logind[1441]: Removed session 7. Jul 2 00:23:50.057973 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 57292 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:50.060044 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:50.066555 systemd-logind[1441]: New session 8 of user core. Jul 2 00:23:50.076237 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:23:50.138403 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:23:50.138812 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:50.144409 sudo[1621]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:50.153360 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:23:50.153789 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:50.176365 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:50.178401 auditctl[1624]: No rules Jul 2 00:23:50.179013 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:23:50.179312 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:50.182670 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:50.226561 augenrules[1642]: No rules Jul 2 00:23:50.228784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:50.230390 sudo[1620]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:50.232911 sshd[1617]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:50.245413 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:57292.service: Deactivated successfully. Jul 2 00:23:50.247674 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:23:50.249640 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. 
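The sudo session above deletes the shipped rule files from /etc/audit/rules.d and restarts audit-rules; auditctl and augenrules both then report "No rules", i.e. the kernel's audit ruleset was flushed and reloaded empty. The same cycle by hand, using the standard auditd userspace tools:

    # Flush every loaded kernel audit rule
    auditctl -D

    # Recompile /etc/audit/rules.d/*.rules and load the result
    # (empty here, since the rule files were just removed)
    augenrules --load

    # Verify the active ruleset
    auditctl -l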
Jul 2 00:23:50.257226 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:57302.service - OpenSSH per-connection server daemon (10.0.0.1:57302). Jul 2 00:23:50.258273 systemd-logind[1441]: Removed session 8. Jul 2 00:23:50.289177 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 57302 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:23:50.290822 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:50.295940 systemd-logind[1441]: New session 9 of user core. Jul 2 00:23:50.306235 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:23:50.364009 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:23:50.364315 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:50.492228 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:23:50.492333 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:23:50.908238 dockerd[1664]: time="2024-07-02T00:23:50.908155976Z" level=info msg="Starting up" Jul 2 00:23:51.126403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:23:51.136030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:51.405530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:51.411363 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:23:51.769906 kubelet[1685]: E0702 00:23:51.769589 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:23:51.777927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:23:51.778162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:23:53.042289 dockerd[1664]: time="2024-07-02T00:23:53.042185628Z" level=info msg="Loading containers: start." Jul 2 00:23:54.300901 kernel: Initializing XFRM netlink socket Jul 2 00:23:54.395724 systemd-networkd[1386]: docker0: Link UP Jul 2 00:23:54.859916 dockerd[1664]: time="2024-07-02T00:23:54.859868651Z" level=info msg="Loading containers: done." Jul 2 00:23:54.936505 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3152186317-merged.mount: Deactivated successfully. 
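dockerd comes up under its socket unit, initializes the XFRM netlink socket for its networking stack, and creates the docker0 bridge that systemd-networkd reports as "Link UP". A short sketch of verifying that state with stock tooling (the storage driver, per the daemon lines that follow, is overlay2):

    # Storage driver and daemon version
    docker info --format '{{.Driver}} {{.ServerVersion}}'

    # The bridge created during "Loading containers"
    ip link show docker0
    ip addr show docker0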
Jul 2 00:23:55.262058 dockerd[1664]: time="2024-07-02T00:23:55.261971254Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:23:55.262294 dockerd[1664]: time="2024-07-02T00:23:55.262269564Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:23:55.262451 dockerd[1664]: time="2024-07-02T00:23:55.262424424Z" level=info msg="Daemon has completed initialization" Jul 2 00:23:55.894347 dockerd[1664]: time="2024-07-02T00:23:55.894221781Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:23:55.894562 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:23:56.868274 containerd[1458]: time="2024-07-02T00:23:56.868221284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 00:24:00.341746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414689879.mount: Deactivated successfully. Jul 2 00:24:01.876349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:24:01.891078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:02.048949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:02.056291 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:24:02.165764 kubelet[1842]: E0702 00:24:02.165571 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:24:02.170386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:24:02.170629 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
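The overlay2 warning above means the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, which makes dockerd avoid the native overlayfs diff path and fall back to the slower generic one; it is a performance caveat for image builds only, not for running containers. Checking the option, assuming this kernel exposes its build config somewhere (the path varies by distribution, and /proc/config.gz may not be present):

    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null \
      || grep CONFIG_OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)"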
Jul 2 00:24:04.372566 containerd[1458]: time="2024-07-02T00:24:04.372466924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:04.422610 containerd[1458]: time="2024-07-02T00:24:04.422507554Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 00:24:04.464107 containerd[1458]: time="2024-07-02T00:24:04.464005778Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:04.482806 containerd[1458]: time="2024-07-02T00:24:04.482718995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:04.484183 containerd[1458]: time="2024-07-02T00:24:04.484055202Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 7.615777743s" Jul 2 00:24:04.484183 containerd[1458]: time="2024-07-02T00:24:04.484110606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 00:24:04.509022 containerd[1458]: time="2024-07-02T00:24:04.508979712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 00:24:12.376421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:24:12.387243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:12.547159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:12.552111 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:24:14.101412 kubelet[1915]: E0702 00:24:14.101346 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:24:14.105514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:24:14.105754 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
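The roughly ten-second cadence of the recurring "Scheduled restart job, restart counter is at N" entries matches a kubelet unit configured with Restart=always and RestartSec=10, the stock behavior of the kubeadm service drop-in; systemd keeps relaunching the still-unconfigured kubelet until config.yaml appears. A hypothetical drop-in expressing that policy (the exact file on this image may differ), plus the query that shows what is actually in effect:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf  (illustrative)
    [Service]
    Restart=always
    RestartSec=10

    # Inspect the policy actually configured on this host
    systemctl show kubelet -p Restart -p RestartUSec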
Jul 2 00:24:16.562357 containerd[1458]: time="2024-07-02T00:24:16.562267835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.574813 containerd[1458]: time="2024-07-02T00:24:16.574727076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 00:24:16.583009 containerd[1458]: time="2024-07-02T00:24:16.582939828Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.601022 containerd[1458]: time="2024-07-02T00:24:16.600833934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.602404 containerd[1458]: time="2024-07-02T00:24:16.602336450Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 12.0933078s" Jul 2 00:24:16.602503 containerd[1458]: time="2024-07-02T00:24:16.602405314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 00:24:16.635971 containerd[1458]: time="2024-07-02T00:24:16.635884523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 00:24:21.090462 containerd[1458]: time="2024-07-02T00:24:21.090370978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:21.104455 containerd[1458]: time="2024-07-02T00:24:21.104399681Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 00:24:21.148376 containerd[1458]: time="2024-07-02T00:24:21.148289376Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:21.208516 containerd[1458]: time="2024-07-02T00:24:21.208443499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:21.209907 containerd[1458]: time="2024-07-02T00:24:21.209837001Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 4.57388171s" Jul 2 00:24:21.209907 containerd[1458]: time="2024-07-02T00:24:21.209899788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 00:24:21.240205 containerd[1458]: 
time="2024-07-02T00:24:21.240156820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 00:24:23.103321 update_engine[1442]: I0702 00:24:23.103247 1442 update_attempter.cc:509] Updating boot flags... Jul 2 00:24:23.198918 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1946) Jul 2 00:24:23.266900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1947) Jul 2 00:24:23.327328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1947) Jul 2 00:24:24.126451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 00:24:24.141112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:24.312956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:24.318543 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:24:24.360876 kubelet[1963]: E0702 00:24:24.360810 1963 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:24:24.365612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:24:24.365895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:24:26.005773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020229098.mount: Deactivated successfully. Jul 2 00:24:27.008605 containerd[1458]: time="2024-07-02T00:24:27.008490692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:27.069236 containerd[1458]: time="2024-07-02T00:24:27.069116722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 00:24:27.096927 containerd[1458]: time="2024-07-02T00:24:27.096825715Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:27.210481 containerd[1458]: time="2024-07-02T00:24:27.210405873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:27.211361 containerd[1458]: time="2024-07-02T00:24:27.211321551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 5.97111131s" Jul 2 00:24:27.211419 containerd[1458]: time="2024-07-02T00:24:27.211366962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 00:24:27.238348 containerd[1458]: time="2024-07-02T00:24:27.238272766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:24:31.707232 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021292920.mount: Deactivated successfully. Jul 2 00:24:34.376453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 00:24:34.387080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:34.551043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:34.555910 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:24:34.601049 kubelet[2003]: E0702 00:24:34.600898 2003 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:24:34.605823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:24:34.606108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:24:38.107146 containerd[1458]: time="2024-07-02T00:24:38.106986015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.110743 containerd[1458]: time="2024-07-02T00:24:38.110033269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:24:38.114506 containerd[1458]: time="2024-07-02T00:24:38.114350338Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.119261 containerd[1458]: time="2024-07-02T00:24:38.119177416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.120630 containerd[1458]: time="2024-07-02T00:24:38.120439396Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 10.882093389s" Jul 2 00:24:38.120630 containerd[1458]: time="2024-07-02T00:24:38.120481663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:24:38.157227 containerd[1458]: time="2024-07-02T00:24:38.157177276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:24:38.836920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473420928.mount: Deactivated successfully. 
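The pulls here are walking through the standard kubeadm control-plane image set for v1.30: kube-apiserver, kube-controller-manager and kube-scheduler at v1.30.2, kube-proxy, coredns v1.11.1, then pause:3.9 and etcd 3.5.12-0 below. The same set can be listed and pre-pulled explicitly, sketched here on the assumption that kubeadm is driving this bootstrap:

    # Images kubeadm expects for this version
    kubeadm config images list --kubernetes-version v1.30.2

    # Pre-pull them through the configured CRI runtime (containerd here)
    kubeadm config images pull --kubernetes-version v1.30.2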
Jul 2 00:24:38.846650 containerd[1458]: time="2024-07-02T00:24:38.846433885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.848303 containerd[1458]: time="2024-07-02T00:24:38.848084052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:24:38.850450 containerd[1458]: time="2024-07-02T00:24:38.850359619Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.854588 containerd[1458]: time="2024-07-02T00:24:38.854481312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.856127 containerd[1458]: time="2024-07-02T00:24:38.855579103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 698.354581ms" Jul 2 00:24:38.856127 containerd[1458]: time="2024-07-02T00:24:38.855624546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:24:38.888387 containerd[1458]: time="2024-07-02T00:24:38.888277671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 00:24:39.879218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675895844.mount: Deactivated successfully. Jul 2 00:24:42.121236 containerd[1458]: time="2024-07-02T00:24:42.121111953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.184668 containerd[1458]: time="2024-07-02T00:24:42.184540941Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 00:24:42.250264 containerd[1458]: time="2024-07-02T00:24:42.250160409Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.320522 containerd[1458]: time="2024-07-02T00:24:42.320410727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.322567 containerd[1458]: time="2024-07-02T00:24:42.322436415Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.434069851s" Jul 2 00:24:42.322567 containerd[1458]: time="2024-07-02T00:24:42.322554291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 00:24:44.579673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
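The stop here is the pivot point: the systemd reload that follows comes from the interactive session (PID 2188, session-9), and when the kubelet next starts it runs with a real config file (note the deprecated-flag warnings) and begins bootstrapping against https://10.0.0.122:6443. The "connection refused" errors after that are normal until the kube-apiserver static pod, whose manifest path /etc/kubernetes/manifests the kubelet registers below, is actually running. A sketch of the reconfigure-and-watch cycle:

    # Pick up new unit files and drop-ins, then restart the kubelet
    systemctl daemon-reload
    systemctl restart kubelet

    # Watch for the API server static pod, then probe the endpoint
    # (it may answer 401/403 until credentials are supplied)
    crictl ps --name kube-apiserver
    curl -k https://10.0.0.122:6443/healthz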
Jul 2 00:24:44.648172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:44.668878 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-9.scope)... Jul 2 00:24:44.668900 systemd[1]: Reloading... Jul 2 00:24:44.750908 zram_generator::config[2225]: No configuration found. Jul 2 00:24:45.266748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:45.352306 systemd[1]: Reloading finished in 682 ms. Jul 2 00:24:45.412467 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:24:45.412591 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:24:45.413010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:45.415054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:45.577631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:45.583626 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:45.629125 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:45.629125 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:45.629125 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:45.630783 kubelet[2274]: I0702 00:24:45.630720 2274 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:45.946233 kubelet[2274]: I0702 00:24:45.946169 2274 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:24:45.946233 kubelet[2274]: I0702 00:24:45.946210 2274 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:45.946482 kubelet[2274]: I0702 00:24:45.946456 2274 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:24:45.967092 kubelet[2274]: I0702 00:24:45.967026 2274 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:45.970201 kubelet[2274]: E0702 00:24:45.969723 2274 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:45.988646 kubelet[2274]: I0702 00:24:45.988573 2274 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:24:45.992115 kubelet[2274]: I0702 00:24:45.992017 2274 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:45.992317 kubelet[2274]: I0702 00:24:45.992075 2274 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:45.992485 kubelet[2274]: I0702 00:24:45.992339 2274 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:24:45.992485 kubelet[2274]: I0702 00:24:45.992353 2274 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:45.992573 kubelet[2274]: I0702 00:24:45.992558 2274 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:45.993828 kubelet[2274]: I0702 00:24:45.993795 2274 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:24:45.993828 kubelet[2274]: I0702 00:24:45.993819 2274 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:24:45.993960 kubelet[2274]: I0702 00:24:45.993877 2274 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:24:45.993960 kubelet[2274]: I0702 00:24:45.993922 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:45.995333 kubelet[2274]: W0702 00:24:45.995259 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:45.995410 kubelet[2274]: E0702 00:24:45.995341 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:45.995762 kubelet[2274]: W0702 00:24:45.995717 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:45.995824 kubelet[2274]: E0702 00:24:45.995770 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:45.999885 kubelet[2274]: I0702 00:24:45.999785 2274 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:46.001918 kubelet[2274]: I0702 00:24:46.001881 2274 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:24:46.002010 kubelet[2274]: W0702 00:24:46.001985 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:24:46.032210 kubelet[2274]: I0702 00:24:46.032135 2274 server.go:1264] "Started kubelet" Jul 2 00:24:46.032392 kubelet[2274]: I0702 00:24:46.032240 2274 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:46.033473 kubelet[2274]: I0702 00:24:46.033448 2274 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:24:46.034068 kubelet[2274]: I0702 00:24:46.033598 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:24:46.034790 kubelet[2274]: I0702 00:24:46.034678 2274 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:46.034975 kubelet[2274]: I0702 00:24:46.034941 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:46.037608 kubelet[2274]: E0702 00:24:46.037515 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:46.037608 kubelet[2274]: I0702 00:24:46.037584 2274 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:46.038033 kubelet[2274]: I0702 00:24:46.037684 2274 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:24:46.038033 kubelet[2274]: I0702 00:24:46.037788 2274 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:24:46.038518 kubelet[2274]: E0702 00:24:46.038487 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Jul 2 00:24:46.039011 kubelet[2274]: E0702 00:24:46.038616 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3da4df481952 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:24:46.032083282 +0000 UTC m=+0.442872698,LastTimestamp:2024-07-02 00:24:46.032083282 +0000 UTC m=+0.442872698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 
00:24:46.039398 kubelet[2274]: I0702 00:24:46.039381 2274 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:24:46.039576 kubelet[2274]: I0702 00:24:46.039557 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:24:46.039938 kubelet[2274]: E0702 00:24:46.039387 2274 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:46.046203 kubelet[2274]: W0702 00:24:46.038221 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:46.046203 kubelet[2274]: E0702 00:24:46.046197 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:46.047533 kubelet[2274]: I0702 00:24:46.047495 2274 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:24:46.070330 kubelet[2274]: I0702 00:24:46.070286 2274 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:46.070330 kubelet[2274]: I0702 00:24:46.070315 2274 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:46.070513 kubelet[2274]: I0702 00:24:46.070357 2274 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:46.139928 kubelet[2274]: I0702 00:24:46.139761 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:46.140385 kubelet[2274]: E0702 00:24:46.140338 2274 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 2 00:24:46.239679 kubelet[2274]: E0702 00:24:46.239500 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Jul 2 00:24:46.341899 kubelet[2274]: I0702 00:24:46.341823 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:46.342475 kubelet[2274]: E0702 00:24:46.342423 2274 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 2 00:24:46.641039 kubelet[2274]: E0702 00:24:46.640940 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Jul 2 00:24:46.675812 kubelet[2274]: I0702 00:24:46.675715 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:46.677463 kubelet[2274]: I0702 00:24:46.677437 2274 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:24:46.677531 kubelet[2274]: I0702 00:24:46.677486 2274 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:46.677531 kubelet[2274]: I0702 00:24:46.677511 2274 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:24:46.677609 kubelet[2274]: E0702 00:24:46.677584 2274 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:46.678939 kubelet[2274]: W0702 00:24:46.678813 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:46.678939 kubelet[2274]: E0702 00:24:46.678922 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:46.742847 kubelet[2274]: I0702 00:24:46.742763 2274 policy_none.go:49] "None policy: Start" Jul 2 00:24:46.743634 kubelet[2274]: I0702 00:24:46.743598 2274 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:24:46.743634 kubelet[2274]: I0702 00:24:46.743627 2274 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:46.744196 kubelet[2274]: I0702 00:24:46.744146 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:46.744696 kubelet[2274]: E0702 00:24:46.744644 2274 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 2 00:24:46.778554 kubelet[2274]: E0702 00:24:46.777777 2274 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:24:46.797639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:24:46.812016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:24:46.817371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 00:24:46.826927 kubelet[2274]: I0702 00:24:46.826891 2274 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:46.827296 kubelet[2274]: I0702 00:24:46.827234 2274 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:46.827445 kubelet[2274]: I0702 00:24:46.827416 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:46.828645 kubelet[2274]: E0702 00:24:46.828597 2274 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:24:46.978602 kubelet[2274]: I0702 00:24:46.978396 2274 topology_manager.go:215] "Topology Admit Handler" podUID="1d4b0e2cd0b1da6c526067ab8c5f910e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:24:46.980112 kubelet[2274]: I0702 00:24:46.980081 2274 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:24:46.981200 kubelet[2274]: I0702 00:24:46.981154 2274 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:24:46.988808 systemd[1]: Created slice kubepods-burstable-pod1d4b0e2cd0b1da6c526067ab8c5f910e.slice - libcontainer container kubepods-burstable-pod1d4b0e2cd0b1da6c526067ab8c5f910e.slice. Jul 2 00:24:47.006904 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jul 2 00:24:47.018578 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. 
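Annotation: the pod UIDs in the Topology Admit Handler lines map directly onto the kubepods-burstable-pod*.slice units systemd creates right after them. A sketch of that mapping under the systemd cgroup driver (the node config above shows "CgroupDriver":"systemd"); the dash-to-underscore escaping mirrors the kubelet's cgroup manager but is hedged here as an illustration, and it is a no-op for these static pods, whose 32-hex-digit UIDs contain no dashes.

package main

import (
	"fmt"
	"strings"
)

// podSlice derives the per-pod systemd slice name from the QoS class and
// pod UID. qosClass is "burstable" or "besteffort"; Guaranteed pods live
// directly under kubepods.slice, so pass "" for them.
func podSlice(qosClass, podUID string) string {
	parts := []string{"kubepods"}
	if qosClass != "" {
		parts = append(parts, qosClass)
	}
	// The kubelet replaces dashes in the UID with underscores so the UID
	// cannot be confused with systemd's "-" path separator.
	parts = append(parts, "pod"+strings.ReplaceAll(podUID, "-", "_"))
	return strings.Join(parts, "-") + ".slice"
}

func main() {
	fmt.Println(podSlice("burstable", "1d4b0e2cd0b1da6c526067ab8c5f910e"))
	// Output: kubepods-burstable-pod1d4b0e2cd0b1da6c526067ab8c5f910e.slice,
	// matching the slice systemd reports creating in the log above.
}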
Jul 2 00:24:47.044754 kubelet[2274]: I0702 00:24:47.044670 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:47.044754 kubelet[2274]: I0702 00:24:47.044731 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:47.044754 kubelet[2274]: I0702 00:24:47.044757 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:47.044754 kubelet[2274]: I0702 00:24:47.044776 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:47.045051 kubelet[2274]: I0702 00:24:47.044796 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:47.045051 kubelet[2274]: I0702 00:24:47.044870 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:47.045051 kubelet[2274]: I0702 00:24:47.044908 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:24:47.045051 kubelet[2274]: I0702 00:24:47.044930 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:47.045051 kubelet[2274]: I0702 00:24:47.044994 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:47.197478 kubelet[2274]: W0702 00:24:47.197379 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.197478 kubelet[2274]: E0702 00:24:47.197472 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.303398 kubelet[2274]: E0702 00:24:47.303245 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:47.304089 containerd[1458]: time="2024-07-02T00:24:47.304017821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1d4b0e2cd0b1da6c526067ab8c5f910e,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:47.316436 kubelet[2274]: E0702 00:24:47.316345 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:47.317031 containerd[1458]: time="2024-07-02T00:24:47.316963071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:47.321305 kubelet[2274]: E0702 00:24:47.321255 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:47.321685 containerd[1458]: time="2024-07-02T00:24:47.321649192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:47.343777 kubelet[2274]: W0702 00:24:47.343663 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.343777 kubelet[2274]: E0702 00:24:47.343766 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.384519 kubelet[2274]: W0702 00:24:47.384431 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.384519 kubelet[2274]: E0702 00:24:47.384522 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.441649 kubelet[2274]: E0702 00:24:47.441560 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" Jul 2 00:24:47.546777 kubelet[2274]: I0702 00:24:47.546713 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:47.547168 kubelet[2274]: E0702 00:24:47.547135 2274 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 2 00:24:47.839514 kubelet[2274]: W0702 00:24:47.839366 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:47.839514 kubelet[2274]: E0702 00:24:47.839431 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:48.119286 kubelet[2274]: E0702 00:24:48.117158 2274 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:48.482189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756078997.mount: Deactivated successfully. Jul 2 00:24:48.503170 containerd[1458]: time="2024-07-02T00:24:48.502987050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:48.507430 containerd[1458]: time="2024-07-02T00:24:48.507306292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:48.508242 containerd[1458]: time="2024-07-02T00:24:48.508172382Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:48.519291 containerd[1458]: time="2024-07-02T00:24:48.512607178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:48.519291 containerd[1458]: time="2024-07-02T00:24:48.513477668Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:48.520771 containerd[1458]: time="2024-07-02T00:24:48.519956681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:48.522947 containerd[1458]: time="2024-07-02T00:24:48.522873782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:24:48.526811 containerd[1458]: time="2024-07-02T00:24:48.524515736Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:48.530807 containerd[1458]: time="2024-07-02T00:24:48.528036282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.206304838s" Jul 2 00:24:48.536939 containerd[1458]: time="2024-07-02T00:24:48.536788066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.219702468s" Jul 2 00:24:48.549817 containerd[1458]: time="2024-07-02T00:24:48.547667803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.243515553s" Jul 2 00:24:48.634512 kubelet[2274]: E0702 00:24:48.634345 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3da4df481952 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:24:46.032083282 +0000 UTC m=+0.442872698,LastTimestamp:2024-07-02 00:24:46.032083282 +0000 UTC m=+0.442872698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 00:24:48.861563 containerd[1458]: time="2024-07-02T00:24:48.860347861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:48.861563 containerd[1458]: time="2024-07-02T00:24:48.860431205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.861563 containerd[1458]: time="2024-07-02T00:24:48.860528174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:48.861563 containerd[1458]: time="2024-07-02T00:24:48.860553240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.862636 containerd[1458]: time="2024-07-02T00:24:48.862238906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:48.862636 containerd[1458]: time="2024-07-02T00:24:48.862328030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.862636 containerd[1458]: time="2024-07-02T00:24:48.862366732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:48.862636 containerd[1458]: time="2024-07-02T00:24:48.862388573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.877226 containerd[1458]: time="2024-07-02T00:24:48.876548621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:48.877226 containerd[1458]: time="2024-07-02T00:24:48.876739834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.877226 containerd[1458]: time="2024-07-02T00:24:48.876825843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:48.877226 containerd[1458]: time="2024-07-02T00:24:48.876904649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:48.926304 systemd[1]: Started cri-containerd-1e9b048d83051e361d352364f6be35305e1ef2348c155dca2240ed32225e708c.scope - libcontainer container 1e9b048d83051e361d352364f6be35305e1ef2348c155dca2240ed32225e708c. Jul 2 00:24:48.929908 systemd[1]: Started cri-containerd-f223e005343eda41a908b984bde9747ec3bbb2a1c0402c800767a07af7852d4c.scope - libcontainer container f223e005343eda41a908b984bde9747ec3bbb2a1c0402c800767a07af7852d4c. Jul 2 00:24:48.939053 systemd[1]: Started cri-containerd-69a436fcdc154c946f90a9c389259555fa1d4d0b35838afe92e43f7a90c9c5c0.scope - libcontainer container 69a436fcdc154c946f90a9c389259555fa1d4d0b35838afe92e43f7a90c9c5c0. 
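Annotation: the three cri-containerd-*.scope units systemd just started are the pod sandboxes ("pause" containers, pulled as registry.k8s.io/pause:3.8 above) that the kubelet requested over CRI. A minimal sketch of that call using the real k8s.io/cri-api types; the metadata values come from this log, while the socket path is the conventional containerd default (an assumption — the log never prints it) and error handling is trimmed. This illustrates the kubelet↔containerd protocol, not kubelet code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// containerd's CRI plugin listens on a local unix socket; no TLS.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost", // values from the log
				Namespace: "kube-system",
				Uid:       "5df30d679156d9b860331584e2d47675",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// containerd starts the pause container inside a cri-containerd-<id>.scope
	// unit and returns the 64-hex-char sandbox id logged as "returns sandbox id".
	fmt.Println("sandbox id:", resp.PodSandboxId)
}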
Jul 2 00:24:49.012789 containerd[1458]: time="2024-07-02T00:24:49.010375787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1d4b0e2cd0b1da6c526067ab8c5f910e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f223e005343eda41a908b984bde9747ec3bbb2a1c0402c800767a07af7852d4c\"" Jul 2 00:24:49.015686 kubelet[2274]: E0702 00:24:49.015533 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:49.043931 kubelet[2274]: E0702 00:24:49.043768 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="3.2s" Jul 2 00:24:49.046602 containerd[1458]: time="2024-07-02T00:24:49.046556484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e9b048d83051e361d352364f6be35305e1ef2348c155dca2240ed32225e708c\"" Jul 2 00:24:49.048688 containerd[1458]: time="2024-07-02T00:24:49.048650027Z" level=info msg="CreateContainer within sandbox \"f223e005343eda41a908b984bde9747ec3bbb2a1c0402c800767a07af7852d4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:24:49.049076 kubelet[2274]: E0702 00:24:49.049047 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:49.051350 containerd[1458]: time="2024-07-02T00:24:49.050296763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"69a436fcdc154c946f90a9c389259555fa1d4d0b35838afe92e43f7a90c9c5c0\"" Jul 2 00:24:49.051723 kubelet[2274]: E0702 00:24:49.050652 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:49.061131 containerd[1458]: time="2024-07-02T00:24:49.058134161Z" level=info msg="CreateContainer within sandbox \"1e9b048d83051e361d352364f6be35305e1ef2348c155dca2240ed32225e708c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:24:49.061131 containerd[1458]: time="2024-07-02T00:24:49.058383362Z" level=info msg="CreateContainer within sandbox \"69a436fcdc154c946f90a9c389259555fa1d4d0b35838afe92e43f7a90c9c5c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:24:49.153456 kubelet[2274]: I0702 00:24:49.152226 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:49.154487 kubelet[2274]: E0702 00:24:49.154414 2274 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 2 00:24:49.526220 kubelet[2274]: W0702 00:24:49.526045 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:49.526220 kubelet[2274]: E0702 00:24:49.526126 2274 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:49.999090 kubelet[2274]: W0702 00:24:49.998968 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:49.999090 kubelet[2274]: E0702 00:24:49.999059 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:50.326155 kubelet[2274]: W0702 00:24:50.325914 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:50.326155 kubelet[2274]: E0702 00:24:50.326014 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:50.440173 kubelet[2274]: W0702 00:24:50.440098 2274 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:50.440173 kubelet[2274]: E0702 00:24:50.440167 2274 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 2 00:24:50.535150 containerd[1458]: time="2024-07-02T00:24:50.535078103Z" level=info msg="CreateContainer within sandbox \"69a436fcdc154c946f90a9c389259555fa1d4d0b35838afe92e43f7a90c9c5c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"acded8b7a2b92beb147a71aab088db3336d927dd4c2cffca86aa927d3cf8bac7\"" Jul 2 00:24:50.535845 containerd[1458]: time="2024-07-02T00:24:50.535820498Z" level=info msg="StartContainer for \"acded8b7a2b92beb147a71aab088db3336d927dd4c2cffca86aa927d3cf8bac7\"" Jul 2 00:24:50.572031 systemd[1]: Started cri-containerd-acded8b7a2b92beb147a71aab088db3336d927dd4c2cffca86aa927d3cf8bac7.scope - libcontainer container acded8b7a2b92beb147a71aab088db3336d927dd4c2cffca86aa927d3cf8bac7. 
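Annotation: the paired W/E reflector lines that keep recurring through this section come from client-go informers relisting against the unreachable apiserver. A self-contained sketch that reproduces the pattern with the real informer machinery; the endpoint and node name come from this log, but the insecure anonymous client is an assumption for brevity — a real kubelet authenticates with its bootstrap kubeconfig.

package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "https://10.0.0.122:6443"} // endpoint from the log
	cfg.TLSClientConfig.Insecure = true                  // sketch only; kubelets use real credentials
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The kubelet watches only its own Node object, which is why every
	// failed list above carries fieldSelector=metadata.name%3Dlocalhost.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=localhost"
		}))
	factory.Core().V1().Nodes().Informer() // register the Node informer

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// While the apiserver refuses connections, the reflector inside the
	// informer logs "failed to list *v1.Node ... connection refused" and
	// retries with backoff -- the same lines repeated through this section.
	time.Sleep(30 * time.Second)
}

The Service, CSIDriver, and RuntimeClass reflectors in the log are the same mechanism applied to other resource types, so all four fail and retry in lockstep until kube-apiserver answers.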
Jul 2 00:24:50.807638 containerd[1458]: time="2024-07-02T00:24:50.807564335Z" level=info msg="StartContainer for \"acded8b7a2b92beb147a71aab088db3336d927dd4c2cffca86aa927d3cf8bac7\" returns successfully" Jul 2 00:24:50.821040 containerd[1458]: time="2024-07-02T00:24:50.820989512Z" level=info msg="CreateContainer within sandbox \"1e9b048d83051e361d352364f6be35305e1ef2348c155dca2240ed32225e708c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7008c6e26bf0e2bd6c5342ae601d18ccc08b0d2db89df8c3a092c8ffe34c3b89\"" Jul 2 00:24:50.821551 containerd[1458]: time="2024-07-02T00:24:50.821486672Z" level=info msg="StartContainer for \"7008c6e26bf0e2bd6c5342ae601d18ccc08b0d2db89df8c3a092c8ffe34c3b89\"" Jul 2 00:24:50.825683 containerd[1458]: time="2024-07-02T00:24:50.825631656Z" level=info msg="CreateContainer within sandbox \"f223e005343eda41a908b984bde9747ec3bbb2a1c0402c800767a07af7852d4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9497ddb2e681e62d90db425d90d8a7f8665bf08992557b7d726df6309c6c66b\"" Jul 2 00:24:50.827989 containerd[1458]: time="2024-07-02T00:24:50.826113758Z" level=info msg="StartContainer for \"d9497ddb2e681e62d90db425d90d8a7f8665bf08992557b7d726df6309c6c66b\"" Jul 2 00:24:50.856064 systemd[1]: Started cri-containerd-7008c6e26bf0e2bd6c5342ae601d18ccc08b0d2db89df8c3a092c8ffe34c3b89.scope - libcontainer container 7008c6e26bf0e2bd6c5342ae601d18ccc08b0d2db89df8c3a092c8ffe34c3b89. Jul 2 00:24:50.860715 systemd[1]: Started cri-containerd-d9497ddb2e681e62d90db425d90d8a7f8665bf08992557b7d726df6309c6c66b.scope - libcontainer container d9497ddb2e681e62d90db425d90d8a7f8665bf08992557b7d726df6309c6c66b. Jul 2 00:24:51.015631 containerd[1458]: time="2024-07-02T00:24:51.015496282Z" level=info msg="StartContainer for \"d9497ddb2e681e62d90db425d90d8a7f8665bf08992557b7d726df6309c6c66b\" returns successfully" Jul 2 00:24:51.015631 containerd[1458]: time="2024-07-02T00:24:51.015494659Z" level=info msg="StartContainer for \"7008c6e26bf0e2bd6c5342ae601d18ccc08b0d2db89df8c3a092c8ffe34c3b89\" returns successfully" Jul 2 00:24:51.820985 kubelet[2274]: E0702 00:24:51.820831 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:51.825303 kubelet[2274]: E0702 00:24:51.825203 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:51.825730 kubelet[2274]: E0702 00:24:51.825630 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:52.356395 kubelet[2274]: I0702 00:24:52.356353 2274 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:52.371417 kubelet[2274]: I0702 00:24:52.371380 2274 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:24:52.497117 kubelet[2274]: E0702 00:24:52.496811 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:52.597136 kubelet[2274]: E0702 00:24:52.597080 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:52.698009 kubelet[2274]: E0702 00:24:52.697945 2274 kubelet_node_status.go:462] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jul 2 00:24:52.798742 kubelet[2274]: E0702 00:24:52.798668 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:52.827152 kubelet[2274]: E0702 00:24:52.827112 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:52.827619 kubelet[2274]: E0702 00:24:52.827433 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:52.827619 kubelet[2274]: E0702 00:24:52.827594 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:52.899728 kubelet[2274]: E0702 00:24:52.899675 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.000761 kubelet[2274]: E0702 00:24:53.000599 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.101363 kubelet[2274]: E0702 00:24:53.101307 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.202061 kubelet[2274]: E0702 00:24:53.201951 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.302667 kubelet[2274]: E0702 00:24:53.302519 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.403242 kubelet[2274]: E0702 00:24:53.403183 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.504070 kubelet[2274]: E0702 00:24:53.504016 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.604730 kubelet[2274]: E0702 00:24:53.604599 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.705368 kubelet[2274]: E0702 00:24:53.705296 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.805994 kubelet[2274]: E0702 00:24:53.805915 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:53.828562 kubelet[2274]: E0702 00:24:53.828531 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:53.906789 kubelet[2274]: E0702 00:24:53.906736 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.007831 kubelet[2274]: E0702 00:24:54.007761 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.108396 kubelet[2274]: E0702 00:24:54.108341 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.209333 kubelet[2274]: E0702 00:24:54.209181 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 2 00:24:54.309970 kubelet[2274]: E0702 00:24:54.309914 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.410587 kubelet[2274]: E0702 00:24:54.410524 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.511573 kubelet[2274]: E0702 00:24:54.511403 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.611993 kubelet[2274]: E0702 00:24:54.611928 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.712800 kubelet[2274]: E0702 00:24:54.712737 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.813556 kubelet[2274]: E0702 00:24:54.813378 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:54.914062 kubelet[2274]: E0702 00:24:54.914007 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.014134 kubelet[2274]: E0702 00:24:55.014091 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.114949 kubelet[2274]: E0702 00:24:55.114602 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.215258 kubelet[2274]: E0702 00:24:55.215194 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.316022 kubelet[2274]: E0702 00:24:55.315986 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.416591 kubelet[2274]: E0702 00:24:55.416553 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.517461 kubelet[2274]: E0702 00:24:55.517408 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.618323 kubelet[2274]: E0702 00:24:55.618267 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.719520 kubelet[2274]: E0702 00:24:55.719355 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.819994 kubelet[2274]: E0702 00:24:55.819931 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:55.920180 kubelet[2274]: E0702 00:24:55.920105 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.020938 kubelet[2274]: E0702 00:24:56.020723 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.121408 kubelet[2274]: E0702 00:24:56.121328 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.222186 kubelet[2274]: E0702 00:24:56.222128 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.323073 kubelet[2274]: E0702 00:24:56.322918 2274 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.423522 kubelet[2274]: E0702 00:24:56.423469 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.524382 kubelet[2274]: E0702 00:24:56.524343 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.625150 kubelet[2274]: E0702 00:24:56.624959 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:56.725881 kubelet[2274]: E0702 00:24:56.725801 2274 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:24:57.002666 kubelet[2274]: I0702 00:24:57.002585 2274 apiserver.go:52] "Watching apiserver" Jul 2 00:24:57.038249 kubelet[2274]: I0702 00:24:57.038200 2274 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:58.045361 systemd[1]: Reloading requested from client PID 2554 ('systemctl') (unit session-9.scope)... Jul 2 00:24:58.045379 systemd[1]: Reloading... Jul 2 00:24:58.139981 zram_generator::config[2594]: No configuration found. Jul 2 00:24:58.288640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:58.398788 systemd[1]: Reloading finished in 352 ms. Jul 2 00:24:58.453681 kubelet[2274]: I0702 00:24:58.453504 2274 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:58.453598 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:58.477099 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:24:58.477530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:58.477597 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 116.5M memory peak, 0B memory swap peak. Jul 2 00:24:58.487352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:58.693233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:58.699536 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:58.744303 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:58.744303 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:58.744303 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
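Annotation: both kubelet starts (PIDs 2274 and 2636) print the same deprecation warnings pointing at the --config file. A hedged sketch of generating the equivalent config stanza from the real v1beta1 types: the volume plugin dir and static pod path are the values from this log, the runtime endpoint is the conventional containerd socket (an assumption), and containerRuntimeEndpoint as a config field requires kubelet v1.27+ (this node runs v1.30.1). --pod-infra-container-image has no config-file counterpart: as the log itself notes, the sandbox image is now reported by the CRI runtime.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces the deprecated --container-runtime-endpoint flag.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir; matches the Flexvolume directory
		// the kubelet recreated earlier in this log.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		// Matches the "Adding static pod path" lines above.
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // YAML suitable for the file --config points at
}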
Jul 2 00:24:58.744796 kubelet[2636]: I0702 00:24:58.744389 2636 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:58.750080 kubelet[2636]: I0702 00:24:58.750026 2636 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:24:58.750080 kubelet[2636]: I0702 00:24:58.750059 2636 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:58.750296 kubelet[2636]: I0702 00:24:58.750284 2636 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:24:58.751601 kubelet[2636]: I0702 00:24:58.751573 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:24:58.752778 kubelet[2636]: I0702 00:24:58.752735 2636 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:58.762755 kubelet[2636]: I0702 00:24:58.762073 2636 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:24:58.762755 kubelet[2636]: I0702 00:24:58.762345 2636 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:58.762755 kubelet[2636]: I0702 00:24:58.762374 2636 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:58.762755 kubelet[2636]: I0702 00:24:58.762545 2636 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762555 2636 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762597 2636 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762720 2636 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762735 2636 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762767 2636 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:24:58.764125 kubelet[2636]: I0702 00:24:58.762788 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:58.766922 kubelet[2636]: I0702 00:24:58.765976 2636 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:58.766922 kubelet[2636]: I0702 00:24:58.766491 2636 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:24:58.767080 kubelet[2636]: I0702 00:24:58.767067 2636 server.go:1264] "Started kubelet" Jul 2 00:24:58.767567 kubelet[2636]: I0702 00:24:58.767514 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:24:58.767756 kubelet[2636]: I0702 00:24:58.767711 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:58.767792 kubelet[2636]: I0702 00:24:58.767773 2636 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:58.768457 kubelet[2636]: I0702 00:24:58.768440 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:58.768710 kubelet[2636]: I0702 00:24:58.768681 2636 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:24:58.773675 kubelet[2636]: I0702 00:24:58.773637 2636 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:58.775627 kubelet[2636]: I0702 00:24:58.775453 2636 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:24:58.776890 kubelet[2636]: I0702 00:24:58.776568 2636 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:24:58.778115 kubelet[2636]: E0702 00:24:58.778083 2636 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:58.778778 kubelet[2636]: I0702 00:24:58.778753 2636 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:24:58.778883 kubelet[2636]: I0702 00:24:58.778842 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:24:58.779951 kubelet[2636]: I0702 00:24:58.779929 2636 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:24:58.788917 kubelet[2636]: I0702 00:24:58.788780 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:58.791672 kubelet[2636]: I0702 00:24:58.791637 2636 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:24:58.791719 kubelet[2636]: I0702 00:24:58.791697 2636 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:58.791753 kubelet[2636]: I0702 00:24:58.791725 2636 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:24:58.791879 kubelet[2636]: E0702 00:24:58.791781 2636 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:58.814988 kubelet[2636]: I0702 00:24:58.814950 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:58.814988 kubelet[2636]: I0702 00:24:58.814975 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:58.815097 kubelet[2636]: I0702 00:24:58.814999 2636 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:58.815179 kubelet[2636]: I0702 00:24:58.815150 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:24:58.815179 kubelet[2636]: I0702 00:24:58.815167 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:24:58.815179 kubelet[2636]: I0702 00:24:58.815186 2636 policy_none.go:49] "None policy: Start" Jul 2 00:24:58.815762 kubelet[2636]: I0702 00:24:58.815741 2636 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:24:58.815830 kubelet[2636]: I0702 00:24:58.815770 2636 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:58.815969 kubelet[2636]: I0702 00:24:58.815953 2636 state_mem.go:75] "Updated machine memory state" Jul 2 00:24:58.820530 kubelet[2636]: I0702 00:24:58.820383 2636 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:58.820664 kubelet[2636]: I0702 00:24:58.820595 2636 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:58.820735 kubelet[2636]: I0702 00:24:58.820712 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:58.878738 kubelet[2636]: I0702 00:24:58.878698 2636 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:24:58.892963 kubelet[2636]: I0702 00:24:58.892912 2636 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:24:58.893077 kubelet[2636]: I0702 00:24:58.893009 2636 topology_manager.go:215] "Topology Admit Handler" podUID="1d4b0e2cd0b1da6c526067ab8c5f910e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:24:58.893105 kubelet[2636]: I0702 00:24:58.893075 2636 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:24:58.979942 kubelet[2636]: I0702 00:24:58.978936 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:58.979942 kubelet[2636]: I0702 00:24:58.978984 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:58.979942 kubelet[2636]: I0702 00:24:58.979006 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:58.979942 kubelet[2636]: I0702 00:24:58.979021 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d4b0e2cd0b1da6c526067ab8c5f910e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d4b0e2cd0b1da6c526067ab8c5f910e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:24:58.979942 kubelet[2636]: I0702 00:24:58.979039 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:58.980336 kubelet[2636]: I0702 00:24:58.979054 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:58.980336 kubelet[2636]: I0702 00:24:58.979071 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:58.980336 kubelet[2636]: I0702 00:24:58.979088 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:24:58.980336 kubelet[2636]: I0702 00:24:58.979104 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:24:59.392465 kubelet[2636]: E0702 00:24:59.392278 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.392465 kubelet[2636]: E0702 00:24:59.392382 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.392791 kubelet[2636]: E0702 00:24:59.392566 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.763388 kubelet[2636]: I0702 00:24:59.763225 2636 apiserver.go:52] "Watching apiserver" Jul 2 00:24:59.776591 kubelet[2636]: I0702 00:24:59.776542 2636 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:59.804551 kubelet[2636]: E0702 00:24:59.804480 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.804783 kubelet[2636]: E0702 00:24:59.804664 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.805145 kubelet[2636]: E0702 00:24:59.805125 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:24:59.989829 kubelet[2636]: I0702 00:24:59.989657 2636 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 00:24:59.989829 kubelet[2636]: I0702 00:24:59.989778 2636 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:25:00.806038 kubelet[2636]: E0702 00:25:00.805988 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:01.134971 kubelet[2636]: I0702 00:25:01.134905 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.134873084 podStartE2EDuration="2.134873084s" podCreationTimestamp="2024-07-02 00:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:00.640036109 +0000 UTC m=+1.936007554" watchObservedRunningTime="2024-07-02 00:25:01.134873084 +0000 UTC m=+2.430844529" Jul 2 00:25:01.278701 kubelet[2636]: E0702 00:25:01.278593 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:01.806976 kubelet[2636]: E0702 00:25:01.806919 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:01.963886 kubelet[2636]: I0702 00:25:01.961950 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.961922645 podStartE2EDuration="2.961922645s" podCreationTimestamp="2024-07-02 00:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:01.135148087 +0000 UTC m=+2.431119532" watchObservedRunningTime="2024-07-02 00:25:01.961922645 +0000 UTC m=+3.257894090" Jul 2 00:25:02.746330 kubelet[2636]: I0702 00:25:02.746253 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.746231551 podStartE2EDuration="3.746231551s" podCreationTimestamp="2024-07-02 00:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 
00:25:01.963022334 +0000 UTC m=+3.258993779" watchObservedRunningTime="2024-07-02 00:25:02.746231551 +0000 UTC m=+4.042202996" Jul 2 00:25:03.989928 kubelet[2636]: E0702 00:25:03.989881 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:04.810173 kubelet[2636]: E0702 00:25:04.810133 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:05.710898 kubelet[2636]: E0702 00:25:05.710821 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:05.813887 kubelet[2636]: E0702 00:25:05.811997 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:06.813823 kubelet[2636]: E0702 00:25:06.813759 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:12.209379 sudo[1653]: pam_unix(sudo:session): session closed for user root Jul 2 00:25:12.214516 sshd[1650]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:12.225743 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:57302.service: Deactivated successfully. Jul 2 00:25:12.227916 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:25:12.228115 systemd[1]: session-9.scope: Consumed 4.767s CPU time, 141.7M memory peak, 0B memory swap peak. Jul 2 00:25:12.228661 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:25:12.229757 systemd-logind[1441]: Removed session 9. Jul 2 00:25:13.512156 kubelet[2636]: I0702 00:25:13.512031 2636 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:25:13.512737 containerd[1458]: time="2024-07-02T00:25:13.512563407Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:25:13.513128 kubelet[2636]: I0702 00:25:13.512848 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:25:14.447238 kubelet[2636]: I0702 00:25:14.445203 2636 topology_manager.go:215] "Topology Admit Handler" podUID="bb67fdda-db61-4610-aa96-28f69b42c4f0" podNamespace="kube-system" podName="kube-proxy-bhbpb" Jul 2 00:25:14.485345 systemd[1]: Created slice kubepods-besteffort-podbb67fdda_db61_4610_aa96_28f69b42c4f0.slice - libcontainer container kubepods-besteffort-podbb67fdda_db61_4610_aa96_28f69b42c4f0.slice. 
Jul 2 00:25:14.578409 kubelet[2636]: I0702 00:25:14.578083 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb67fdda-db61-4610-aa96-28f69b42c4f0-xtables-lock\") pod \"kube-proxy-bhbpb\" (UID: \"bb67fdda-db61-4610-aa96-28f69b42c4f0\") " pod="kube-system/kube-proxy-bhbpb"
Jul 2 00:25:14.578409 kubelet[2636]: I0702 00:25:14.578133 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb67fdda-db61-4610-aa96-28f69b42c4f0-kube-proxy\") pod \"kube-proxy-bhbpb\" (UID: \"bb67fdda-db61-4610-aa96-28f69b42c4f0\") " pod="kube-system/kube-proxy-bhbpb"
Jul 2 00:25:14.578409 kubelet[2636]: I0702 00:25:14.578169 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb67fdda-db61-4610-aa96-28f69b42c4f0-lib-modules\") pod \"kube-proxy-bhbpb\" (UID: \"bb67fdda-db61-4610-aa96-28f69b42c4f0\") " pod="kube-system/kube-proxy-bhbpb"
Jul 2 00:25:14.578409 kubelet[2636]: I0702 00:25:14.578189 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp554\" (UniqueName: \"kubernetes.io/projected/bb67fdda-db61-4610-aa96-28f69b42c4f0-kube-api-access-rp554\") pod \"kube-proxy-bhbpb\" (UID: \"bb67fdda-db61-4610-aa96-28f69b42c4f0\") " pod="kube-system/kube-proxy-bhbpb"
Jul 2 00:25:14.689035 kubelet[2636]: I0702 00:25:14.688967 2636 topology_manager.go:215] "Topology Admit Handler" podUID="3d822179-b824-4d47-831f-401bff224fcf" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-qz4nj"
Jul 2 00:25:14.720049 systemd[1]: Created slice kubepods-besteffort-pod3d822179_b824_4d47_831f_401bff224fcf.slice - libcontainer container kubepods-besteffort-pod3d822179_b824_4d47_831f_401bff224fcf.slice.
Jul 2 00:25:14.797429 kubelet[2636]: I0702 00:25:14.794627 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fb48\" (UniqueName: \"kubernetes.io/projected/3d822179-b824-4d47-831f-401bff224fcf-kube-api-access-8fb48\") pod \"tigera-operator-76ff79f7fd-qz4nj\" (UID: \"3d822179-b824-4d47-831f-401bff224fcf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-qz4nj"
Jul 2 00:25:14.797429 kubelet[2636]: I0702 00:25:14.794672 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d822179-b824-4d47-831f-401bff224fcf-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-qz4nj\" (UID: \"3d822179-b824-4d47-831f-401bff224fcf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-qz4nj"
Jul 2 00:25:14.822756 kubelet[2636]: E0702 00:25:14.822682 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:25:14.824966 containerd[1458]: time="2024-07-02T00:25:14.824420995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhbpb,Uid:bb67fdda-db61-4610-aa96-28f69b42c4f0,Namespace:kube-system,Attempt:0,}"
Jul 2 00:25:14.947993 containerd[1458]: time="2024-07-02T00:25:14.943921275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:25:14.947993 containerd[1458]: time="2024-07-02T00:25:14.943996769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:14.947993 containerd[1458]: time="2024-07-02T00:25:14.944019583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:25:14.947993 containerd[1458]: time="2024-07-02T00:25:14.944033089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:15.008605 systemd[1]: Started cri-containerd-3813d033a5b70272e937a74e7b989d45094e2fe0a5ad8f597a88368c780625bb.scope - libcontainer container 3813d033a5b70272e937a74e7b989d45094e2fe0a5ad8f597a88368c780625bb.
Jul 2 00:25:15.026436 containerd[1458]: time="2024-07-02T00:25:15.025933449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-qz4nj,Uid:3d822179-b824-4d47-831f-401bff224fcf,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:25:15.075433 containerd[1458]: time="2024-07-02T00:25:15.075165839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhbpb,Uid:bb67fdda-db61-4610-aa96-28f69b42c4f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3813d033a5b70272e937a74e7b989d45094e2fe0a5ad8f597a88368c780625bb\""
Jul 2 00:25:15.081187 kubelet[2636]: E0702 00:25:15.081155 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:25:15.084339 containerd[1458]: time="2024-07-02T00:25:15.084040889Z" level=info msg="CreateContainer within sandbox \"3813d033a5b70272e937a74e7b989d45094e2fe0a5ad8f597a88368c780625bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:25:15.131746 containerd[1458]: time="2024-07-02T00:25:15.130638689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:25:15.131746 containerd[1458]: time="2024-07-02T00:25:15.130791032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:15.131746 containerd[1458]: time="2024-07-02T00:25:15.130920291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:25:15.131746 containerd[1458]: time="2024-07-02T00:25:15.130977631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:15.186086 systemd[1]: Started cri-containerd-7cdd173a25389bce0a0bcedc8ddf1ca0ff0e6832bc470d217347d580c7109c61.scope - libcontainer container 7cdd173a25389bce0a0bcedc8ddf1ca0ff0e6832bc470d217347d580c7109c61.
Jul 2 00:25:15.212099 containerd[1458]: time="2024-07-02T00:25:15.212025662Z" level=info msg="CreateContainer within sandbox \"3813d033a5b70272e937a74e7b989d45094e2fe0a5ad8f597a88368c780625bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fff5d0a258456d35170f183b84ffcf16349d9231e2bb41e4585a5c9e14cc6745\""
Jul 2 00:25:15.217335 containerd[1458]: time="2024-07-02T00:25:15.215676889Z" level=info msg="StartContainer for \"fff5d0a258456d35170f183b84ffcf16349d9231e2bb41e4585a5c9e14cc6745\""
Jul 2 00:25:15.238928 containerd[1458]: time="2024-07-02T00:25:15.238690720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-qz4nj,Uid:3d822179-b824-4d47-831f-401bff224fcf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7cdd173a25389bce0a0bcedc8ddf1ca0ff0e6832bc470d217347d580c7109c61\""
Jul 2 00:25:15.241668 containerd[1458]: time="2024-07-02T00:25:15.241499004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:25:15.259236 systemd[1]: Started cri-containerd-fff5d0a258456d35170f183b84ffcf16349d9231e2bb41e4585a5c9e14cc6745.scope - libcontainer container fff5d0a258456d35170f183b84ffcf16349d9231e2bb41e4585a5c9e14cc6745.
Jul 2 00:25:15.302884 containerd[1458]: time="2024-07-02T00:25:15.302780765Z" level=info msg="StartContainer for \"fff5d0a258456d35170f183b84ffcf16349d9231e2bb41e4585a5c9e14cc6745\" returns successfully"
Jul 2 00:25:15.845815 kubelet[2636]: E0702 00:25:15.845767 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:25:15.854972 kubelet[2636]: I0702 00:25:15.854664 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bhbpb" podStartSLOduration=1.8546377600000001 podStartE2EDuration="1.85463776s" podCreationTimestamp="2024-07-02 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:15.85446664 +0000 UTC m=+17.150438085" watchObservedRunningTime="2024-07-02 00:25:15.85463776 +0000 UTC m=+17.150609205"
Jul 2 00:25:16.716352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267809564.mount: Deactivated successfully.
Jul 2 00:25:18.574651 containerd[1458]: time="2024-07-02T00:25:18.574566327Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:18.602833 containerd[1458]: time="2024-07-02T00:25:18.602738857Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076088"
Jul 2 00:25:18.684584 containerd[1458]: time="2024-07-02T00:25:18.684491223Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:18.726157 containerd[1458]: time="2024-07-02T00:25:18.726101192Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:18.727324 containerd[1458]: time="2024-07-02T00:25:18.727284053Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.485713882s"
Jul 2 00:25:18.727324 containerd[1458]: time="2024-07-02T00:25:18.727310906Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:25:18.729796 containerd[1458]: time="2024-07-02T00:25:18.729767273Z" level=info msg="CreateContainer within sandbox \"7cdd173a25389bce0a0bcedc8ddf1ca0ff0e6832bc470d217347d580c7109c61\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:25:19.670548 containerd[1458]: time="2024-07-02T00:25:19.670454603Z" level=info msg="CreateContainer within sandbox \"7cdd173a25389bce0a0bcedc8ddf1ca0ff0e6832bc470d217347d580c7109c61\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5795dd8028b71f171124972aae351aa31331b480b44d1476b73946f82227c1c\""
Jul 2 00:25:19.671314 containerd[1458]: time="2024-07-02T00:25:19.671273335Z" level=info msg="StartContainer for \"a5795dd8028b71f171124972aae351aa31331b480b44d1476b73946f82227c1c\""
Jul 2 00:25:19.711121 systemd[1]: Started cri-containerd-a5795dd8028b71f171124972aae351aa31331b480b44d1476b73946f82227c1c.scope - libcontainer container a5795dd8028b71f171124972aae351aa31331b480b44d1476b73946f82227c1c.
Jul 2 00:25:19.914172 containerd[1458]: time="2024-07-02T00:25:19.914103208Z" level=info msg="StartContainer for \"a5795dd8028b71f171124972aae351aa31331b480b44d1476b73946f82227c1c\" returns successfully" Jul 2 00:25:22.703430 kubelet[2636]: I0702 00:25:22.703126 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-qz4nj" podStartSLOduration=5.214843201 podStartE2EDuration="8.703097918s" podCreationTimestamp="2024-07-02 00:25:14 +0000 UTC" firstStartedPulling="2024-07-02 00:25:15.240258227 +0000 UTC m=+16.536229672" lastFinishedPulling="2024-07-02 00:25:18.728512944 +0000 UTC m=+20.024484389" observedRunningTime="2024-07-02 00:25:20.92531347 +0000 UTC m=+22.221284945" watchObservedRunningTime="2024-07-02 00:25:22.703097918 +0000 UTC m=+23.999069363" Jul 2 00:25:22.703430 kubelet[2636]: I0702 00:25:22.703378 2636 topology_manager.go:215] "Topology Admit Handler" podUID="d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" podNamespace="calico-system" podName="calico-typha-868b58b984-lgvkj" Jul 2 00:25:22.721093 systemd[1]: Created slice kubepods-besteffort-podd89bbb42_0c74_4c7c_96e1_c1c5ddbc9f73.slice - libcontainer container kubepods-besteffort-podd89bbb42_0c74_4c7c_96e1_c1c5ddbc9f73.slice. Jul 2 00:25:22.751408 kubelet[2636]: I0702 00:25:22.751303 2636 topology_manager.go:215] "Topology Admit Handler" podUID="5144195b-1e6c-4149-a83d-a7e9f00c1f70" podNamespace="calico-system" podName="calico-node-28ktt" Jul 2 00:25:22.761234 systemd[1]: Created slice kubepods-besteffort-pod5144195b_1e6c_4149_a83d_a7e9f00c1f70.slice - libcontainer container kubepods-besteffort-pod5144195b_1e6c_4149_a83d_a7e9f00c1f70.slice. Jul 2 00:25:22.870672 kubelet[2636]: I0702 00:25:22.870588 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-run-calico\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.870672 kubelet[2636]: I0702 00:25:22.870661 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-net-dir\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.870672 kubelet[2636]: I0702 00:25:22.870685 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-lib-modules\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871001 kubelet[2636]: I0702 00:25:22.870709 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-tigera-ca-bundle\") pod \"calico-typha-868b58b984-lgvkj\" (UID: \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " pod="calico-system/calico-typha-868b58b984-lgvkj" Jul 2 00:25:22.871001 kubelet[2636]: I0702 00:25:22.870734 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-typha-certs\") pod \"calico-typha-868b58b984-lgvkj\" (UID: 
\"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " pod="calico-system/calico-typha-868b58b984-lgvkj" Jul 2 00:25:22.871001 kubelet[2636]: I0702 00:25:22.870758 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lglc\" (UniqueName: \"kubernetes.io/projected/5144195b-1e6c-4149-a83d-a7e9f00c1f70-kube-api-access-7lglc\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871001 kubelet[2636]: I0702 00:25:22.870779 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-lib-calico\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871001 kubelet[2636]: I0702 00:25:22.870798 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-log-dir\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871216 kubelet[2636]: I0702 00:25:22.870817 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-policysync\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871216 kubelet[2636]: I0702 00:25:22.870832 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5144195b-1e6c-4149-a83d-a7e9f00c1f70-tigera-ca-bundle\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871216 kubelet[2636]: I0702 00:25:22.870846 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-bin-dir\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871216 kubelet[2636]: I0702 00:25:22.870918 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-flexvol-driver-host\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871216 kubelet[2636]: I0702 00:25:22.870940 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjdl2\" (UniqueName: \"kubernetes.io/projected/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-kube-api-access-sjdl2\") pod \"calico-typha-868b58b984-lgvkj\" (UID: \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " pod="calico-system/calico-typha-868b58b984-lgvkj" Jul 2 00:25:22.871399 kubelet[2636]: I0702 00:25:22.870972 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5144195b-1e6c-4149-a83d-a7e9f00c1f70-node-certs\") pod \"calico-node-28ktt\" (UID: 
\"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.871399 kubelet[2636]: I0702 00:25:22.870997 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-xtables-lock\") pod \"calico-node-28ktt\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " pod="calico-system/calico-node-28ktt" Jul 2 00:25:22.873726 kubelet[2636]: I0702 00:25:22.873656 2636 topology_manager.go:215] "Topology Admit Handler" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" podNamespace="calico-system" podName="csi-node-driver-7nzzg" Jul 2 00:25:22.874173 kubelet[2636]: E0702 00:25:22.874090 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:22.974620 kubelet[2636]: E0702 00:25:22.974416 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.974620 kubelet[2636]: W0702 00:25:22.974451 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.974620 kubelet[2636]: E0702 00:25:22.974495 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:22.978439 kubelet[2636]: E0702 00:25:22.978317 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.978439 kubelet[2636]: W0702 00:25:22.978346 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.978439 kubelet[2636]: E0702 00:25:22.978371 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.984958 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.987897 kubelet[2636]: W0702 00:25:22.984986 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.985025 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.985393 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.987897 kubelet[2636]: W0702 00:25:22.985405 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.985422 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.985659 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.987897 kubelet[2636]: W0702 00:25:22.985669 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.987897 kubelet[2636]: E0702 00:25:22.985680 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:22.988829 kubelet[2636]: E0702 00:25:22.988805 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:22.988829 kubelet[2636]: W0702 00:25:22.988826 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:22.989086 kubelet[2636]: E0702 00:25:22.988842 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.026361 kubelet[2636]: E0702 00:25:23.026320 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:23.026848 containerd[1458]: time="2024-07-02T00:25:23.026802235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868b58b984-lgvkj,Uid:d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:23.065104 kubelet[2636]: E0702 00:25:23.065053 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:23.065771 containerd[1458]: time="2024-07-02T00:25:23.065676066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-28ktt,Uid:5144195b-1e6c-4149-a83d-a7e9f00c1f70,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:23.073214 kubelet[2636]: E0702 00:25:23.073153 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.073214 kubelet[2636]: W0702 00:25:23.073188 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.073214 kubelet[2636]: E0702 00:25:23.073219 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.073475 kubelet[2636]: I0702 00:25:23.073253 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3d03670-d650-4edf-88af-0fb85e858e8c-registration-dir\") pod \"csi-node-driver-7nzzg\" (UID: \"c3d03670-d650-4edf-88af-0fb85e858e8c\") " pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:23.073632 kubelet[2636]: E0702 00:25:23.073604 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.073696 kubelet[2636]: W0702 00:25:23.073632 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.073696 kubelet[2636]: E0702 00:25:23.073674 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.073776 kubelet[2636]: I0702 00:25:23.073704 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqlmb\" (UniqueName: \"kubernetes.io/projected/c3d03670-d650-4edf-88af-0fb85e858e8c-kube-api-access-xqlmb\") pod \"csi-node-driver-7nzzg\" (UID: \"c3d03670-d650-4edf-88af-0fb85e858e8c\") " pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:23.074067 kubelet[2636]: E0702 00:25:23.074047 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.074067 kubelet[2636]: W0702 00:25:23.074066 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.074173 kubelet[2636]: E0702 00:25:23.074090 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.074173 kubelet[2636]: I0702 00:25:23.074107 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3d03670-d650-4edf-88af-0fb85e858e8c-kubelet-dir\") pod \"csi-node-driver-7nzzg\" (UID: \"c3d03670-d650-4edf-88af-0fb85e858e8c\") " pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:23.074544 kubelet[2636]: E0702 00:25:23.074366 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.074544 kubelet[2636]: W0702 00:25:23.074386 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.074544 kubelet[2636]: E0702 00:25:23.074406 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.074544 kubelet[2636]: I0702 00:25:23.074428 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3d03670-d650-4edf-88af-0fb85e858e8c-varrun\") pod \"csi-node-driver-7nzzg\" (UID: \"c3d03670-d650-4edf-88af-0fb85e858e8c\") " pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:23.074946 kubelet[2636]: E0702 00:25:23.074922 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.074946 kubelet[2636]: W0702 00:25:23.074940 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.075085 kubelet[2636]: E0702 00:25:23.075059 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.075134 kubelet[2636]: I0702 00:25:23.075090 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3d03670-d650-4edf-88af-0fb85e858e8c-socket-dir\") pod \"csi-node-driver-7nzzg\" (UID: \"c3d03670-d650-4edf-88af-0fb85e858e8c\") " pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:23.075258 kubelet[2636]: E0702 00:25:23.075245 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.075258 kubelet[2636]: W0702 00:25:23.075256 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.075397 kubelet[2636]: E0702 00:25:23.075377 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.075498 kubelet[2636]: E0702 00:25:23.075487 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.075498 kubelet[2636]: W0702 00:25:23.075496 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.075590 kubelet[2636]: E0702 00:25:23.075537 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.075797 kubelet[2636]: E0702 00:25:23.075782 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.075797 kubelet[2636]: W0702 00:25:23.075795 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.075938 kubelet[2636]: E0702 00:25:23.075921 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.076058 kubelet[2636]: E0702 00:25:23.076044 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.076058 kubelet[2636]: W0702 00:25:23.076054 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.076191 kubelet[2636]: E0702 00:25:23.076171 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.076367 kubelet[2636]: E0702 00:25:23.076330 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.076367 kubelet[2636]: W0702 00:25:23.076345 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.076367 kubelet[2636]: E0702 00:25:23.076362 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.076609 kubelet[2636]: E0702 00:25:23.076581 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.076609 kubelet[2636]: W0702 00:25:23.076607 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.076698 kubelet[2636]: E0702 00:25:23.076619 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.076849 kubelet[2636]: E0702 00:25:23.076834 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.076849 kubelet[2636]: W0702 00:25:23.076846 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.076849 kubelet[2636]: E0702 00:25:23.076872 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.077127 kubelet[2636]: E0702 00:25:23.077108 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.077127 kubelet[2636]: W0702 00:25:23.077123 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.077232 kubelet[2636]: E0702 00:25:23.077134 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.077385 kubelet[2636]: E0702 00:25:23.077372 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.077385 kubelet[2636]: W0702 00:25:23.077385 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.077455 kubelet[2636]: E0702 00:25:23.077395 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.077614 kubelet[2636]: E0702 00:25:23.077601 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.077614 kubelet[2636]: W0702 00:25:23.077612 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.077614 kubelet[2636]: E0702 00:25:23.077622 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.176222 kubelet[2636]: E0702 00:25:23.176176 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.176222 kubelet[2636]: W0702 00:25:23.176206 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.176470 kubelet[2636]: E0702 00:25:23.176239 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.176651 kubelet[2636]: E0702 00:25:23.176613 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.176651 kubelet[2636]: W0702 00:25:23.176631 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.176651 kubelet[2636]: E0702 00:25:23.176645 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.176991 kubelet[2636]: E0702 00:25:23.176978 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.176991 kubelet[2636]: W0702 00:25:23.176990 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.177140 kubelet[2636]: E0702 00:25:23.177009 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.177282 kubelet[2636]: E0702 00:25:23.177269 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.177282 kubelet[2636]: W0702 00:25:23.177280 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.177348 kubelet[2636]: E0702 00:25:23.177298 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.177741 kubelet[2636]: E0702 00:25:23.177698 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.177741 kubelet[2636]: W0702 00:25:23.177736 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.177819 kubelet[2636]: E0702 00:25:23.177773 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.178080 kubelet[2636]: E0702 00:25:23.178061 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.178080 kubelet[2636]: W0702 00:25:23.178075 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.178151 kubelet[2636]: E0702 00:25:23.178093 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.178341 kubelet[2636]: E0702 00:25:23.178318 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.178341 kubelet[2636]: W0702 00:25:23.178329 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.178412 kubelet[2636]: E0702 00:25:23.178371 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.178573 kubelet[2636]: E0702 00:25:23.178562 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.178606 kubelet[2636]: W0702 00:25:23.178573 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.178632 kubelet[2636]: E0702 00:25:23.178612 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.178810 kubelet[2636]: E0702 00:25:23.178797 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.178810 kubelet[2636]: W0702 00:25:23.178808 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.178883 kubelet[2636]: E0702 00:25:23.178848 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.179134 kubelet[2636]: E0702 00:25:23.179117 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.179134 kubelet[2636]: W0702 00:25:23.179131 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.179226 kubelet[2636]: E0702 00:25:23.179187 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.179461 kubelet[2636]: E0702 00:25:23.179436 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.179507 kubelet[2636]: W0702 00:25:23.179458 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.179507 kubelet[2636]: E0702 00:25:23.179498 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.179730 kubelet[2636]: E0702 00:25:23.179713 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.179730 kubelet[2636]: W0702 00:25:23.179724 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.179805 kubelet[2636]: E0702 00:25:23.179751 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.179929 kubelet[2636]: E0702 00:25:23.179912 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.179929 kubelet[2636]: W0702 00:25:23.179923 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.180010 kubelet[2636]: E0702 00:25:23.179949 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.180122 kubelet[2636]: E0702 00:25:23.180106 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.180122 kubelet[2636]: W0702 00:25:23.180116 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.180188 kubelet[2636]: E0702 00:25:23.180134 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:23.280640 kubelet[2636]: E0702 00:25:23.280478 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.280640 kubelet[2636]: W0702 00:25:23.280508 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.280640 kubelet[2636]: E0702 00:25:23.280534 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.280883 kubelet[2636]: E0702 00:25:23.280781 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.280883 kubelet[2636]: W0702 00:25:23.280799 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.280883 kubelet[2636]: E0702 00:25:23.280811 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.299473 kubelet[2636]: E0702 00:25:23.299411 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.299473 kubelet[2636]: W0702 00:25:23.299448 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.299473 kubelet[2636]: E0702 00:25:23.299476 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.382432 kubelet[2636]: E0702 00:25:23.382379 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.382432 kubelet[2636]: W0702 00:25:23.382415 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.382432 kubelet[2636]: E0702 00:25:23.382443 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:23.461802 kubelet[2636]: E0702 00:25:23.461757 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:23.461802 kubelet[2636]: W0702 00:25:23.461790 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:23.461802 kubelet[2636]: E0702 00:25:23.461818 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:24.323014 containerd[1458]: time="2024-07-02T00:25:24.322716378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:24.323014 containerd[1458]: time="2024-07-02T00:25:24.322780310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:24.323014 containerd[1458]: time="2024-07-02T00:25:24.322797473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:24.323014 containerd[1458]: time="2024-07-02T00:25:24.322809976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:24.346821 containerd[1458]: time="2024-07-02T00:25:24.346254767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:24.346821 containerd[1458]: time="2024-07-02T00:25:24.346402931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:24.346821 containerd[1458]: time="2024-07-02T00:25:24.346671144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:24.346821 containerd[1458]: time="2024-07-02T00:25:24.346733354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:24.368109 systemd[1]: Started cri-containerd-231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484.scope - libcontainer container 231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484. Jul 2 00:25:24.376355 systemd[1]: Started cri-containerd-64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb.scope - libcontainer container 64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb. 
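The repeated driver-call.go/plugins.go failures above come from the kubelet's FlexVolume prober invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init`; the binary is missing, stdout is empty, and unmarshalling "" fails with "unexpected end of JSON input", so the prober retries every few hundred milliseconds. Below is a minimal sketch of the handshake the prober expects, assuming only the documented FlexVolume calling convention (a subcommand as argv[1], a JSON status object on stdout). This is not Calico's actual uds driver.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's driver-call.go
// expects on stdout from every FlexVolume invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout here is exactly what yields
		// "unexpected end of JSON input" in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

Calico normally installs this binary through the flexvol-driver init container, which is exactly what the calico-node pod in the following entries is attempting to run.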
Jul 2 00:25:24.401208 containerd[1458]: time="2024-07-02T00:25:24.401161121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-28ktt,Uid:5144195b-1e6c-4149-a83d-a7e9f00c1f70,Namespace:calico-system,Attempt:0,} returns sandbox id \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\"" Jul 2 00:25:24.402059 kubelet[2636]: E0702 00:25:24.402028 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:24.403982 containerd[1458]: time="2024-07-02T00:25:24.403954227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:25:24.428787 containerd[1458]: time="2024-07-02T00:25:24.428643539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868b58b984-lgvkj,Uid:d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73,Namespace:calico-system,Attempt:0,} returns sandbox id \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\"" Jul 2 00:25:24.429599 kubelet[2636]: E0702 00:25:24.429563 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:24.793143 kubelet[2636]: E0702 00:25:24.793068 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:26.525684 containerd[1458]: time="2024-07-02T00:25:26.525599833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:26.553800 containerd[1458]: time="2024-07-02T00:25:26.553702111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:25:26.600492 containerd[1458]: time="2024-07-02T00:25:26.600388691Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:26.631348 containerd[1458]: time="2024-07-02T00:25:26.631282416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:26.632266 containerd[1458]: time="2024-07-02T00:25:26.632215881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.228223882s" Jul 2 00:25:26.632266 containerd[1458]: time="2024-07-02T00:25:26.632265636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:25:26.633265 containerd[1458]: time="2024-07-02T00:25:26.633236342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 
00:25:26.634383 containerd[1458]: time="2024-07-02T00:25:26.634352977Z" level=info msg="CreateContainer within sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:25:26.747144 containerd[1458]: time="2024-07-02T00:25:26.747083315Z" level=info msg="CreateContainer within sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\"" Jul 2 00:25:26.747793 containerd[1458]: time="2024-07-02T00:25:26.747715494Z" level=info msg="StartContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\"" Jul 2 00:25:26.788100 systemd[1]: Started cri-containerd-6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f.scope - libcontainer container 6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f. Jul 2 00:25:26.794022 kubelet[2636]: E0702 00:25:26.793287 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:26.847116 systemd[1]: cri-containerd-6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f.scope: Deactivated successfully. Jul 2 00:25:26.906019 containerd[1458]: time="2024-07-02T00:25:26.905941414Z" level=info msg="StartContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" returns successfully" Jul 2 00:25:26.929668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f-rootfs.mount: Deactivated successfully. 
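The PullImage/ImageCreate sequence above is containerd's CRI plugin resolving the repo tag to a digest and unpacking the layers. A rough equivalent with the public containerd Go client, using the conventional socket path and the `k8s.io` namespace the log entries run in (a sketch, not the CRI plugin's internal code):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference taken from the log above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```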
Jul 2 00:25:26.939159 containerd[1458]: time="2024-07-02T00:25:26.938749840Z" level=info msg="StopContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" with timeout 5 (s)" Jul 2 00:25:27.019214 containerd[1458]: time="2024-07-02T00:25:27.019149578Z" level=info msg="Stop container \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" with signal terminated" Jul 2 00:25:27.019985 containerd[1458]: time="2024-07-02T00:25:27.019922425Z" level=info msg="shim disconnected" id=6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f namespace=k8s.io Jul 2 00:25:27.019985 containerd[1458]: time="2024-07-02T00:25:27.019983702Z" level=warning msg="cleaning up after shim disconnected" id=6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f namespace=k8s.io Jul 2 00:25:27.020071 containerd[1458]: time="2024-07-02T00:25:27.019994734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:27.043839 containerd[1458]: time="2024-07-02T00:25:27.043660788Z" level=info msg="StopContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" returns successfully" Jul 2 00:25:27.044763 containerd[1458]: time="2024-07-02T00:25:27.044709602Z" level=info msg="StopPodSandbox for \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\"" Jul 2 00:25:27.044763 containerd[1458]: time="2024-07-02T00:25:27.044770769Z" level=info msg="Container to stop \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:25:27.054622 systemd[1]: cri-containerd-231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484.scope: Deactivated successfully. Jul 2 00:25:27.362447 containerd[1458]: time="2024-07-02T00:25:27.362256166Z" level=info msg="shim disconnected" id=231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484 namespace=k8s.io Jul 2 00:25:27.363542 containerd[1458]: time="2024-07-02T00:25:27.363512888Z" level=warning msg="cleaning up after shim disconnected" id=231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484 namespace=k8s.io Jul 2 00:25:27.363542 containerd[1458]: time="2024-07-02T00:25:27.363538096Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:27.378578 containerd[1458]: time="2024-07-02T00:25:27.378518272Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:25:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:25:27.380098 containerd[1458]: time="2024-07-02T00:25:27.380050751Z" level=info msg="TearDown network for sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" successfully" Jul 2 00:25:27.380098 containerd[1458]: time="2024-07-02T00:25:27.380096197Z" level=info msg="StopPodSandbox for \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" returns successfully" Jul 2 00:25:27.427431 kubelet[2636]: I0702 00:25:27.427373 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lglc\" (UniqueName: \"kubernetes.io/projected/5144195b-1e6c-4149-a83d-a7e9f00c1f70-kube-api-access-7lglc\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427431 kubelet[2636]: I0702 00:25:27.427431 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5144195b-1e6c-4149-a83d-a7e9f00c1f70-tigera-ca-bundle\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427461 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-xtables-lock\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427480 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-lib-calico\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427507 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5144195b-1e6c-4149-a83d-a7e9f00c1f70-node-certs\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427525 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-lib-modules\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427545 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-policysync\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.427691 kubelet[2636]: I0702 00:25:27.427629 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-policysync" (OuterVolumeSpecName: "policysync") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.427989 kubelet[2636]: I0702 00:25:27.427927 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.428166 kubelet[2636]: I0702 00:25:27.428000 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.428166 kubelet[2636]: I0702 00:25:27.428060 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.428348 kubelet[2636]: I0702 00:25:27.428314 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5144195b-1e6c-4149-a83d-a7e9f00c1f70-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:25:27.432121 kubelet[2636]: I0702 00:25:27.432039 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5144195b-1e6c-4149-a83d-a7e9f00c1f70-node-certs" (OuterVolumeSpecName: "node-certs") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:25:27.432490 kubelet[2636]: I0702 00:25:27.432450 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5144195b-1e6c-4149-a83d-a7e9f00c1f70-kube-api-access-7lglc" (OuterVolumeSpecName: "kube-api-access-7lglc") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "kube-api-access-7lglc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:25:27.528814 kubelet[2636]: I0702 00:25:27.528738 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-log-dir\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.528814 kubelet[2636]: I0702 00:25:27.528787 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-bin-dir\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.528814 kubelet[2636]: I0702 00:25:27.528806 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-flexvol-driver-host\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528851 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-run-calico\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528889 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-net-dir\") pod \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\" (UID: \"5144195b-1e6c-4149-a83d-a7e9f00c1f70\") " Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528885 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528931 2636 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528929 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.529172 kubelet[2636]: I0702 00:25:27.528941 2636 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-policysync\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.528934 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.528950 2636 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.529003 2636 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7lglc\" (UniqueName: \"kubernetes.io/projected/5144195b-1e6c-4149-a83d-a7e9f00c1f70-kube-api-access-7lglc\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.529013 2636 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5144195b-1e6c-4149-a83d-a7e9f00c1f70-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.529022 2636 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.529029 2636 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529380 kubelet[2636]: I0702 00:25:27.529054 2636 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5144195b-1e6c-4149-a83d-a7e9f00c1f70-node-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.529609 kubelet[2636]: I0702 00:25:27.528949 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.529609 kubelet[2636]: I0702 00:25:27.529040 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "5144195b-1e6c-4149-a83d-a7e9f00c1f70" (UID: "5144195b-1e6c-4149-a83d-a7e9f00c1f70"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:25:27.630204 kubelet[2636]: I0702 00:25:27.630140 2636 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.630204 kubelet[2636]: I0702 00:25:27.630189 2636 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.630204 kubelet[2636]: I0702 00:25:27.630199 2636 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.630204 kubelet[2636]: I0702 00:25:27.630208 2636 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5144195b-1e6c-4149-a83d-a7e9f00c1f70-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:27.739136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484-rootfs.mount: Deactivated successfully. Jul 2 00:25:27.739280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484-shm.mount: Deactivated successfully. Jul 2 00:25:27.739367 systemd[1]: var-lib-kubelet-pods-5144195b\x2d1e6c\x2d4149\x2da83d\x2da7e9f00c1f70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7lglc.mount: Deactivated successfully. Jul 2 00:25:27.739451 systemd[1]: var-lib-kubelet-pods-5144195b\x2d1e6c\x2d4149\x2da83d\x2da7e9f00c1f70-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jul 2 00:25:27.936676 kubelet[2636]: I0702 00:25:27.936023 2636 scope.go:117] "RemoveContainer" containerID="6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f" Jul 2 00:25:27.938312 containerd[1458]: time="2024-07-02T00:25:27.937911161Z" level=info msg="RemoveContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\"" Jul 2 00:25:27.944011 containerd[1458]: time="2024-07-02T00:25:27.943968295Z" level=info msg="RemoveContainer for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" returns successfully" Jul 2 00:25:27.947680 kubelet[2636]: I0702 00:25:27.947634 2636 scope.go:117] "RemoveContainer" containerID="6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f" Jul 2 00:25:27.947947 containerd[1458]: time="2024-07-02T00:25:27.947847299Z" level=error msg="ContainerStatus for \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\": not found" Jul 2 00:25:27.948081 kubelet[2636]: E0702 00:25:27.948041 2636 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\": not found" containerID="6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f" Jul 2 00:25:27.948142 kubelet[2636]: I0702 00:25:27.948077 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f"} err="failed to get container status \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c6a3d01a3a6209dc24588a9786d4b92386ed1cd4de57c9b7ce4c1dfc3e25c3f\": not found" Jul 2 00:25:27.951180 systemd[1]: Removed slice kubepods-besteffort-pod5144195b_1e6c_4149_a83d_a7e9f00c1f70.slice - libcontainer container kubepods-besteffort-pod5144195b_1e6c_4149_a83d_a7e9f00c1f70.slice. Jul 2 00:25:27.989884 kubelet[2636]: I0702 00:25:27.988837 2636 topology_manager.go:215] "Topology Admit Handler" podUID="0698794a-d406-448b-9a48-0dec52dea360" podNamespace="calico-system" podName="calico-node-pknnd" Jul 2 00:25:27.990498 kubelet[2636]: E0702 00:25:27.990402 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5144195b-1e6c-4149-a83d-a7e9f00c1f70" containerName="flexvol-driver" Jul 2 00:25:27.991817 kubelet[2636]: I0702 00:25:27.991300 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="5144195b-1e6c-4149-a83d-a7e9f00c1f70" containerName="flexvol-driver" Jul 2 00:25:28.004715 systemd[1]: Created slice kubepods-besteffort-pod0698794a_d406_448b_9a48_0dec52dea360.slice - libcontainer container kubepods-besteffort-pod0698794a_d406_448b_9a48_0dec52dea360.slice. 
Jul 2 00:25:28.032585 kubelet[2636]: I0702 00:25:28.032518 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-cni-log-dir\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032585 kubelet[2636]: I0702 00:25:28.032575 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdf6\" (UniqueName: \"kubernetes.io/projected/0698794a-d406-448b-9a48-0dec52dea360-kube-api-access-stdf6\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032793 kubelet[2636]: I0702 00:25:28.032603 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-flexvol-driver-host\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032793 kubelet[2636]: I0702 00:25:28.032624 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-var-run-calico\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032793 kubelet[2636]: I0702 00:25:28.032738 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-policysync\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032793 kubelet[2636]: I0702 00:25:28.032780 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0698794a-d406-448b-9a48-0dec52dea360-node-certs\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032925 kubelet[2636]: I0702 00:25:28.032800 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-cni-net-dir\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032925 kubelet[2636]: I0702 00:25:28.032819 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-lib-modules\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032925 kubelet[2636]: I0702 00:25:28.032845 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-var-lib-calico\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032925 kubelet[2636]: I0702 00:25:28.032887 2636 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-xtables-lock\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.032925 kubelet[2636]: I0702 00:25:28.032902 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0698794a-d406-448b-9a48-0dec52dea360-cni-bin-dir\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.033032 kubelet[2636]: I0702 00:25:28.032918 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0698794a-d406-448b-9a48-0dec52dea360-tigera-ca-bundle\") pod \"calico-node-pknnd\" (UID: \"0698794a-d406-448b-9a48-0dec52dea360\") " pod="calico-system/calico-node-pknnd" Jul 2 00:25:28.311981 kubelet[2636]: E0702 00:25:28.311785 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:28.313198 containerd[1458]: time="2024-07-02T00:25:28.312547384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pknnd,Uid:0698794a-d406-448b-9a48-0dec52dea360,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:28.340325 containerd[1458]: time="2024-07-02T00:25:28.340165776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:28.340495 containerd[1458]: time="2024-07-02T00:25:28.340289803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.340495 containerd[1458]: time="2024-07-02T00:25:28.340325861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:28.340495 containerd[1458]: time="2024-07-02T00:25:28.340339458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.361058 systemd[1]: Started cri-containerd-56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c.scope - libcontainer container 56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c. 
Jul 2 00:25:28.396511 containerd[1458]: time="2024-07-02T00:25:28.396431274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pknnd,Uid:0698794a-d406-448b-9a48-0dec52dea360,Namespace:calico-system,Attempt:0,} returns sandbox id \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\"" Jul 2 00:25:28.398621 kubelet[2636]: E0702 00:25:28.398583 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:28.402734 containerd[1458]: time="2024-07-02T00:25:28.402690049Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:25:28.426317 containerd[1458]: time="2024-07-02T00:25:28.426227446Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408\"" Jul 2 00:25:28.428133 containerd[1458]: time="2024-07-02T00:25:28.426900462Z" level=info msg="StartContainer for \"a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408\"" Jul 2 00:25:28.463022 systemd[1]: Started cri-containerd-a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408.scope - libcontainer container a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408. Jul 2 00:25:28.515424 systemd[1]: cri-containerd-a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408.scope: Deactivated successfully. Jul 2 00:25:28.793896 kubelet[2636]: E0702 00:25:28.792374 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:28.810791 containerd[1458]: time="2024-07-02T00:25:28.810729205Z" level=info msg="StartContainer for \"a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408\" returns successfully" Jul 2 00:25:28.811673 kubelet[2636]: I0702 00:25:28.811573 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5144195b-1e6c-4149-a83d-a7e9f00c1f70" path="/var/lib/kubelet/pods/5144195b-1e6c-4149-a83d-a7e9f00c1f70/volumes" Jul 2 00:25:28.833581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408-rootfs.mount: Deactivated successfully. 
Jul 2 00:25:28.939991 kubelet[2636]: E0702 00:25:28.939930 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:29.107977 containerd[1458]: time="2024-07-02T00:25:29.107737164Z" level=info msg="shim disconnected" id=a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408 namespace=k8s.io Jul 2 00:25:29.107977 containerd[1458]: time="2024-07-02T00:25:29.107824871Z" level=warning msg="cleaning up after shim disconnected" id=a3bdcdec8595eb47e541d63edbf9c7d15bfb6f634d73e2fc0927debb88f9d408 namespace=k8s.io Jul 2 00:25:29.107977 containerd[1458]: time="2024-07-02T00:25:29.107838367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:29.944547 kubelet[2636]: E0702 00:25:29.944498 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:30.688692 containerd[1458]: time="2024-07-02T00:25:30.688523485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:30.691128 containerd[1458]: time="2024-07-02T00:25:30.691038213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:25:30.695070 containerd[1458]: time="2024-07-02T00:25:30.695004252Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:30.700568 containerd[1458]: time="2024-07-02T00:25:30.700507883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:30.701260 containerd[1458]: time="2024-07-02T00:25:30.701220904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.067944966s" Jul 2 00:25:30.701330 containerd[1458]: time="2024-07-02T00:25:30.701263165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:25:30.711351 containerd[1458]: time="2024-07-02T00:25:30.711271895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:25:30.752375 containerd[1458]: time="2024-07-02T00:25:30.752237678Z" level=info msg="CreateContainer within sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:25:30.793742 kubelet[2636]: E0702 00:25:30.792471 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:31.977045 containerd[1458]: time="2024-07-02T00:25:31.976988372Z" level=info msg="CreateContainer 
within sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\"" Jul 2 00:25:31.977655 containerd[1458]: time="2024-07-02T00:25:31.977454231Z" level=info msg="StartContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\"" Jul 2 00:25:32.016004 systemd[1]: Started cri-containerd-a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b.scope - libcontainer container a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b. Jul 2 00:25:32.714457 containerd[1458]: time="2024-07-02T00:25:32.714263319Z" level=info msg="StartContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" returns successfully" Jul 2 00:25:32.792926 kubelet[2636]: E0702 00:25:32.792802 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:32.957147 containerd[1458]: time="2024-07-02T00:25:32.956926250Z" level=info msg="StopContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" with timeout 300 (s)" Jul 2 00:25:32.958763 containerd[1458]: time="2024-07-02T00:25:32.958588440Z" level=info msg="Stop container \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" with signal terminated" Jul 2 00:25:32.972168 systemd[1]: cri-containerd-a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b.scope: Deactivated successfully. Jul 2 00:25:33.008900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b-rootfs.mount: Deactivated successfully. 
Jul 2 00:25:33.117624 containerd[1458]: time="2024-07-02T00:25:33.117535072Z" level=info msg="shim disconnected" id=a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b namespace=k8s.io Jul 2 00:25:33.117624 containerd[1458]: time="2024-07-02T00:25:33.117601999Z" level=warning msg="cleaning up after shim disconnected" id=a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b namespace=k8s.io Jul 2 00:25:33.117624 containerd[1458]: time="2024-07-02T00:25:33.117615455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:33.130355 kubelet[2636]: I0702 00:25:33.129889 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-868b58b984-lgvkj" podStartSLOduration=4.849218417 podStartE2EDuration="11.129834242s" podCreationTimestamp="2024-07-02 00:25:22 +0000 UTC" firstStartedPulling="2024-07-02 00:25:24.430276034 +0000 UTC m=+25.726247479" lastFinishedPulling="2024-07-02 00:25:30.710891859 +0000 UTC m=+32.006863304" observedRunningTime="2024-07-02 00:25:33.128719007 +0000 UTC m=+34.424690482" watchObservedRunningTime="2024-07-02 00:25:33.129834242 +0000 UTC m=+34.425805687" Jul 2 00:25:33.147666 containerd[1458]: time="2024-07-02T00:25:33.147582699Z" level=info msg="StopContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" returns successfully" Jul 2 00:25:33.149635 containerd[1458]: time="2024-07-02T00:25:33.149324849Z" level=info msg="StopPodSandbox for \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\"" Jul 2 00:25:33.149635 containerd[1458]: time="2024-07-02T00:25:33.149412325Z" level=info msg="Container to stop \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:25:33.153419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb-shm.mount: Deactivated successfully. Jul 2 00:25:33.165681 systemd[1]: cri-containerd-64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb.scope: Deactivated successfully. Jul 2 00:25:33.194312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb-rootfs.mount: Deactivated successfully. 
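The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pulling window, computed from the timestamps it reports.

```
pulling window      = lastFinishedPulling − firstStartedPulling
                    = 00:25:30.710891859 − 00:25:24.430276034 = 6.280615825 s
podStartSLOduration = podStartE2EDuration − pulling window
                    = 11.129834242 s − 6.280615825 s = 4.849218417 s
```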
Jul 2 00:25:33.206676 containerd[1458]: time="2024-07-02T00:25:33.206604616Z" level=info msg="shim disconnected" id=64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb namespace=k8s.io Jul 2 00:25:33.207042 containerd[1458]: time="2024-07-02T00:25:33.206993026Z" level=warning msg="cleaning up after shim disconnected" id=64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb namespace=k8s.io Jul 2 00:25:33.207042 containerd[1458]: time="2024-07-02T00:25:33.207020308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:33.227647 containerd[1458]: time="2024-07-02T00:25:33.227294577Z" level=info msg="TearDown network for sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" successfully" Jul 2 00:25:33.227647 containerd[1458]: time="2024-07-02T00:25:33.227335867Z" level=info msg="StopPodSandbox for \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" returns successfully" Jul 2 00:25:33.252761 kubelet[2636]: I0702 00:25:33.252184 2636 topology_manager.go:215] "Topology Admit Handler" podUID="3ce5879d-7edc-41e4-b12f-1d06f358808c" podNamespace="calico-system" podName="calico-typha-79b5b5db67-2ws9j" Jul 2 00:25:33.252761 kubelet[2636]: E0702 00:25:33.252259 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" containerName="calico-typha" Jul 2 00:25:33.252761 kubelet[2636]: I0702 00:25:33.252285 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" containerName="calico-typha" Jul 2 00:25:33.265024 systemd[1]: Created slice kubepods-besteffort-pod3ce5879d_7edc_41e4_b12f_1d06f358808c.slice - libcontainer container kubepods-besteffort-pod3ce5879d_7edc_41e4_b12f_1d06f358808c.slice. 
Jul 2 00:25:33.325102 kubelet[2636]: I0702 00:25:33.325021 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce5879d-7edc-41e4-b12f-1d06f358808c-tigera-ca-bundle\") pod \"calico-typha-79b5b5db67-2ws9j\" (UID: \"3ce5879d-7edc-41e4-b12f-1d06f358808c\") " pod="calico-system/calico-typha-79b5b5db67-2ws9j" Jul 2 00:25:33.325102 kubelet[2636]: I0702 00:25:33.325082 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ce5879d-7edc-41e4-b12f-1d06f358808c-typha-certs\") pod \"calico-typha-79b5b5db67-2ws9j\" (UID: \"3ce5879d-7edc-41e4-b12f-1d06f358808c\") " pod="calico-system/calico-typha-79b5b5db67-2ws9j" Jul 2 00:25:33.325102 kubelet[2636]: I0702 00:25:33.325115 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhps\" (UniqueName: \"kubernetes.io/projected/3ce5879d-7edc-41e4-b12f-1d06f358808c-kube-api-access-5vhps\") pod \"calico-typha-79b5b5db67-2ws9j\" (UID: \"3ce5879d-7edc-41e4-b12f-1d06f358808c\") " pod="calico-system/calico-typha-79b5b5db67-2ws9j" Jul 2 00:25:33.426231 kubelet[2636]: I0702 00:25:33.425410 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-typha-certs\") pod \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\" (UID: \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " Jul 2 00:25:33.426231 kubelet[2636]: I0702 00:25:33.425464 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjdl2\" (UniqueName: \"kubernetes.io/projected/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-kube-api-access-sjdl2\") pod \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\" (UID: \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " Jul 2 00:25:33.426231 kubelet[2636]: I0702 00:25:33.425500 2636 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-tigera-ca-bundle\") pod \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\" (UID: \"d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73\") " Jul 2 00:25:33.432559 systemd[1]: var-lib-kubelet-pods-d89bbb42\x2d0c74\x2d4c7c\x2d96e1\x2dc1c5ddbc9f73-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 00:25:33.436900 kubelet[2636]: I0702 00:25:33.436837 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-kube-api-access-sjdl2" (OuterVolumeSpecName: "kube-api-access-sjdl2") pod "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" (UID: "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73"). InnerVolumeSpecName "kube-api-access-sjdl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:25:33.439368 kubelet[2636]: I0702 00:25:33.439304 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" (UID: "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:25:33.440325 systemd[1]: var-lib-kubelet-pods-d89bbb42\x2d0c74\x2d4c7c\x2d96e1\x2dc1c5ddbc9f73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjdl2.mount: Deactivated successfully. Jul 2 00:25:33.445466 kubelet[2636]: I0702 00:25:33.445410 2636 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" (UID: "d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:25:33.526446 kubelet[2636]: I0702 00:25:33.526275 2636 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-typha-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:33.526446 kubelet[2636]: I0702 00:25:33.526336 2636 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sjdl2\" (UniqueName: \"kubernetes.io/projected/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-kube-api-access-sjdl2\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:33.526446 kubelet[2636]: I0702 00:25:33.526353 2636 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 00:25:33.620436 kubelet[2636]: E0702 00:25:33.620384 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:33.621728 containerd[1458]: time="2024-07-02T00:25:33.621254740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b5b5db67-2ws9j,Uid:3ce5879d-7edc-41e4-b12f-1d06f358808c,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:33.958755 kubelet[2636]: I0702 00:25:33.958710 2636 scope.go:117] "RemoveContainer" containerID="a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b" Jul 2 00:25:33.960044 containerd[1458]: time="2024-07-02T00:25:33.959937220Z" level=info msg="RemoveContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\"" Jul 2 00:25:33.965711 systemd[1]: Removed slice kubepods-besteffort-podd89bbb42_0c74_4c7c_96e1_c1c5ddbc9f73.slice - libcontainer container kubepods-besteffort-podd89bbb42_0c74_4c7c_96e1_c1c5ddbc9f73.slice. Jul 2 00:25:34.008155 systemd[1]: var-lib-kubelet-pods-d89bbb42\x2d0c74\x2d4c7c\x2d96e1\x2dc1c5ddbc9f73-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Jul 2 00:25:34.408096 containerd[1458]: time="2024-07-02T00:25:34.408038198Z" level=info msg="RemoveContainer for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" returns successfully" Jul 2 00:25:34.408656 kubelet[2636]: I0702 00:25:34.408384 2636 scope.go:117] "RemoveContainer" containerID="a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b" Jul 2 00:25:34.408763 containerd[1458]: time="2024-07-02T00:25:34.408715237Z" level=error msg="ContainerStatus for \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\": not found" Jul 2 00:25:34.408988 kubelet[2636]: E0702 00:25:34.408962 2636 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\": not found" containerID="a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b" Jul 2 00:25:34.409043 kubelet[2636]: I0702 00:25:34.408993 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b"} err="failed to get container status \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7dbd4a27540c8622620be473110b6b61a998ee17f008c588db492eb7ae60b0b\": not found" Jul 2 00:25:34.458620 containerd[1458]: time="2024-07-02T00:25:34.458481918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:34.458620 containerd[1458]: time="2024-07-02T00:25:34.458578643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:34.458620 containerd[1458]: time="2024-07-02T00:25:34.458595434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:34.458620 containerd[1458]: time="2024-07-02T00:25:34.458605934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:34.478094 systemd[1]: run-containerd-runc-k8s.io-f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1-runc.ScL85S.mount: Deactivated successfully. Jul 2 00:25:34.491026 systemd[1]: Started cri-containerd-f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1.scope - libcontainer container f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1. 
Jul 2 00:25:34.538775 containerd[1458]: time="2024-07-02T00:25:34.538648443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b5b5db67-2ws9j,Uid:3ce5879d-7edc-41e4-b12f-1d06f358808c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1\"" Jul 2 00:25:34.539339 kubelet[2636]: E0702 00:25:34.539300 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:34.547773 containerd[1458]: time="2024-07-02T00:25:34.547726778Z" level=info msg="CreateContainer within sandbox \"f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:25:34.792701 kubelet[2636]: E0702 00:25:34.792539 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:34.795304 kubelet[2636]: I0702 00:25:34.795275 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73" path="/var/lib/kubelet/pods/d89bbb42-0c74-4c7c-96e1-c1c5ddbc9f73/volumes" Jul 2 00:25:35.519563 containerd[1458]: time="2024-07-02T00:25:35.519489602Z" level=info msg="CreateContainer within sandbox \"f2e6c5b2ae7d747e8e56d03cdbc61c2c59fdced14a71519f85d4a36f2281f3b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac66aa11f9cfc30d5283950870dc1c065f66f431c6b591dbfb350802681129bd\"" Jul 2 00:25:35.520523 containerd[1458]: time="2024-07-02T00:25:35.520473376Z" level=info msg="StartContainer for \"ac66aa11f9cfc30d5283950870dc1c065f66f431c6b591dbfb350802681129bd\"" Jul 2 00:25:35.568184 systemd[1]: Started cri-containerd-ac66aa11f9cfc30d5283950870dc1c065f66f431c6b591dbfb350802681129bd.scope - libcontainer container ac66aa11f9cfc30d5283950870dc1c065f66f431c6b591dbfb350802681129bd. 
Jul 2 00:25:35.682465 containerd[1458]: time="2024-07-02T00:25:35.682387510Z" level=info msg="StartContainer for \"ac66aa11f9cfc30d5283950870dc1c065f66f431c6b591dbfb350802681129bd\" returns successfully" Jul 2 00:25:35.966164 kubelet[2636]: E0702 00:25:35.966116 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:35.977119 kubelet[2636]: I0702 00:25:35.977011 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79b5b5db67-2ws9j" podStartSLOduration=11.976991113 podStartE2EDuration="11.976991113s" podCreationTimestamp="2024-07-02 00:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:35.976510488 +0000 UTC m=+37.272481933" watchObservedRunningTime="2024-07-02 00:25:35.976991113 +0000 UTC m=+37.272962558" Jul 2 00:25:36.793075 kubelet[2636]: E0702 00:25:36.793004 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:36.968230 kubelet[2636]: E0702 00:25:36.968181 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:37.061764 containerd[1458]: time="2024-07-02T00:25:37.061606572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:37.062937 containerd[1458]: time="2024-07-02T00:25:37.062648806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:25:37.065528 containerd[1458]: time="2024-07-02T00:25:37.065469083Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:37.068494 containerd[1458]: time="2024-07-02T00:25:37.068422222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:37.069310 containerd[1458]: time="2024-07-02T00:25:37.069246511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.357909522s" Jul 2 00:25:37.069310 containerd[1458]: time="2024-07-02T00:25:37.069297117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:25:37.071718 containerd[1458]: time="2024-07-02T00:25:37.071679350Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:25:37.095889 containerd[1458]: 
time="2024-07-02T00:25:37.095817785Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99\"" Jul 2 00:25:37.098771 containerd[1458]: time="2024-07-02T00:25:37.096754478Z" level=info msg="StartContainer for \"04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99\"" Jul 2 00:25:37.133147 systemd[1]: Started cri-containerd-04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99.scope - libcontainer container 04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99. Jul 2 00:25:37.215599 containerd[1458]: time="2024-07-02T00:25:37.215534822Z" level=info msg="StartContainer for \"04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99\" returns successfully" Jul 2 00:25:37.885953 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:36258.service - OpenSSH per-connection server daemon (10.0.0.1:36258). Jul 2 00:25:37.971253 kubelet[2636]: E0702 00:25:37.971108 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:37.971808 kubelet[2636]: E0702 00:25:37.971347 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:38.022327 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 36258 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:25:38.025058 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:38.032593 systemd-logind[1441]: New session 10 of user core. Jul 2 00:25:38.041261 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:25:38.379193 sshd[3639]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:38.385117 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:36258.service: Deactivated successfully. Jul 2 00:25:38.388010 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:25:38.388892 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:25:38.391209 systemd-logind[1441]: Removed session 10. Jul 2 00:25:38.547203 containerd[1458]: time="2024-07-02T00:25:38.547131200Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:25:38.550776 systemd[1]: cri-containerd-04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99.scope: Deactivated successfully. Jul 2 00:25:38.572260 kubelet[2636]: I0702 00:25:38.571372 2636 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:25:38.575578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99-rootfs.mount: Deactivated successfully. Jul 2 00:25:38.799925 systemd[1]: Created slice kubepods-besteffort-podc3d03670_d650_4edf_88af_0fb85e858e8c.slice - libcontainer container kubepods-besteffort-podc3d03670_d650_4edf_88af_0fb85e858e8c.slice. 
Jul 2 00:25:38.802745 containerd[1458]: time="2024-07-02T00:25:38.802680201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nzzg,Uid:c3d03670-d650-4edf-88af-0fb85e858e8c,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:38.835292 kubelet[2636]: I0702 00:25:38.833562 2636 topology_manager.go:215] "Topology Admit Handler" podUID="975d7404-b4fe-4099-94e1-d75e094c0eea" podNamespace="calico-system" podName="calico-kube-controllers-656cfcb6dd-26g68" Jul 2 00:25:38.835292 kubelet[2636]: I0702 00:25:38.835265 2636 topology_manager.go:215] "Topology Admit Handler" podUID="0df39595-d448-474d-85a9-1efe22750b9b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jh2jm" Jul 2 00:25:38.835580 kubelet[2636]: I0702 00:25:38.835373 2636 topology_manager.go:215] "Topology Admit Handler" podUID="dda3dde4-f9bb-48b1-b9f5-9ee16ac96658" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8fw82" Jul 2 00:25:38.842881 systemd[1]: Created slice kubepods-besteffort-pod975d7404_b4fe_4099_94e1_d75e094c0eea.slice - libcontainer container kubepods-besteffort-pod975d7404_b4fe_4099_94e1_d75e094c0eea.slice. Jul 2 00:25:38.848615 systemd[1]: Created slice kubepods-burstable-poddda3dde4_f9bb_48b1_b9f5_9ee16ac96658.slice - libcontainer container kubepods-burstable-poddda3dde4_f9bb_48b1_b9f5_9ee16ac96658.slice. Jul 2 00:25:38.859214 systemd[1]: Created slice kubepods-burstable-pod0df39595_d448_474d_85a9_1efe22750b9b.slice - libcontainer container kubepods-burstable-pod0df39595_d448_474d_85a9_1efe22750b9b.slice. Jul 2 00:25:38.966192 kubelet[2636]: I0702 00:25:38.966104 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpnj\" (UniqueName: \"kubernetes.io/projected/0df39595-d448-474d-85a9-1efe22750b9b-kube-api-access-zqpnj\") pod \"coredns-7db6d8ff4d-jh2jm\" (UID: \"0df39595-d448-474d-85a9-1efe22750b9b\") " pod="kube-system/coredns-7db6d8ff4d-jh2jm" Jul 2 00:25:38.966192 kubelet[2636]: I0702 00:25:38.966188 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dlf9\" (UniqueName: \"kubernetes.io/projected/975d7404-b4fe-4099-94e1-d75e094c0eea-kube-api-access-5dlf9\") pod \"calico-kube-controllers-656cfcb6dd-26g68\" (UID: \"975d7404-b4fe-4099-94e1-d75e094c0eea\") " pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" Jul 2 00:25:38.966192 kubelet[2636]: I0702 00:25:38.966218 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dda3dde4-f9bb-48b1-b9f5-9ee16ac96658-config-volume\") pod \"coredns-7db6d8ff4d-8fw82\" (UID: \"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658\") " pod="kube-system/coredns-7db6d8ff4d-8fw82" Jul 2 00:25:38.966498 kubelet[2636]: I0702 00:25:38.966275 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/975d7404-b4fe-4099-94e1-d75e094c0eea-tigera-ca-bundle\") pod \"calico-kube-controllers-656cfcb6dd-26g68\" (UID: \"975d7404-b4fe-4099-94e1-d75e094c0eea\") " pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" Jul 2 00:25:38.966498 kubelet[2636]: I0702 00:25:38.966297 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df39595-d448-474d-85a9-1efe22750b9b-config-volume\") pod \"coredns-7db6d8ff4d-jh2jm\" (UID: 
\"0df39595-d448-474d-85a9-1efe22750b9b\") " pod="kube-system/coredns-7db6d8ff4d-jh2jm" Jul 2 00:25:38.966498 kubelet[2636]: I0702 00:25:38.966320 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52klb\" (UniqueName: \"kubernetes.io/projected/dda3dde4-f9bb-48b1-b9f5-9ee16ac96658-kube-api-access-52klb\") pod \"coredns-7db6d8ff4d-8fw82\" (UID: \"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658\") " pod="kube-system/coredns-7db6d8ff4d-8fw82" Jul 2 00:25:39.000165 kubelet[2636]: E0702 00:25:39.000096 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:39.293323 containerd[1458]: time="2024-07-02T00:25:39.293239071Z" level=info msg="shim disconnected" id=04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99 namespace=k8s.io Jul 2 00:25:39.293323 containerd[1458]: time="2024-07-02T00:25:39.293314846Z" level=warning msg="cleaning up after shim disconnected" id=04a7cbe3eb35cb725680475bd2768b4271f2a33b45bcbed28a75f99d82eb5d99 namespace=k8s.io Jul 2 00:25:39.293323 containerd[1458]: time="2024-07-02T00:25:39.293325646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:39.446893 containerd[1458]: time="2024-07-02T00:25:39.446802566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656cfcb6dd-26g68,Uid:975d7404-b4fe-4099-94e1-d75e094c0eea,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:39.452242 kubelet[2636]: E0702 00:25:39.452192 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:39.452826 containerd[1458]: time="2024-07-02T00:25:39.452772479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8fw82,Uid:dda3dde4-f9bb-48b1-b9f5-9ee16ac96658,Namespace:kube-system,Attempt:0,}" Jul 2 00:25:39.463460 kubelet[2636]: E0702 00:25:39.463405 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:39.464143 containerd[1458]: time="2024-07-02T00:25:39.464099549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jh2jm,Uid:0df39595-d448-474d-85a9-1efe22750b9b,Namespace:kube-system,Attempt:0,}" Jul 2 00:25:40.003358 kubelet[2636]: E0702 00:25:40.003303 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:40.004647 containerd[1458]: time="2024-07-02T00:25:40.004010099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:25:40.576765 containerd[1458]: time="2024-07-02T00:25:40.576651575Z" level=error msg="Failed to destroy network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.577627 containerd[1458]: time="2024-07-02T00:25:40.577467116Z" level=error msg="encountered an error cleaning up failed sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.577627 containerd[1458]: time="2024-07-02T00:25:40.577527220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nzzg,Uid:c3d03670-d650-4edf-88af-0fb85e858e8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.578310 kubelet[2636]: E0702 00:25:40.578248 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.578419 kubelet[2636]: E0702 00:25:40.578346 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:40.578419 kubelet[2636]: E0702 00:25:40.578375 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7nzzg" Jul 2 00:25:40.578471 kubelet[2636]: E0702 00:25:40.578428 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7nzzg_calico-system(c3d03670-d650-4edf-88af-0fb85e858e8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7nzzg_calico-system(c3d03670-d650-4edf-88af-0fb85e858e8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:40.586786 containerd[1458]: time="2024-07-02T00:25:40.586716930Z" level=error msg="Failed to destroy network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.587782 containerd[1458]: time="2024-07-02T00:25:40.587703996Z" level=error msg="encountered an error cleaning up failed sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.588131 containerd[1458]: time="2024-07-02T00:25:40.587806831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656cfcb6dd-26g68,Uid:975d7404-b4fe-4099-94e1-d75e094c0eea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.588228 kubelet[2636]: E0702 00:25:40.588102 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.588228 kubelet[2636]: E0702 00:25:40.588181 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" Jul 2 00:25:40.588228 kubelet[2636]: E0702 00:25:40.588204 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" Jul 2 00:25:40.588366 kubelet[2636]: E0702 00:25:40.588250 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-656cfcb6dd-26g68_calico-system(975d7404-b4fe-4099-94e1-d75e094c0eea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-656cfcb6dd-26g68_calico-system(975d7404-b4fe-4099-94e1-d75e094c0eea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" podUID="975d7404-b4fe-4099-94e1-d75e094c0eea" Jul 2 00:25:40.599788 containerd[1458]: time="2024-07-02T00:25:40.599683920Z" level=error msg="Failed to destroy network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.600372 containerd[1458]: time="2024-07-02T00:25:40.600340358Z" 
level=error msg="encountered an error cleaning up failed sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.600471 containerd[1458]: time="2024-07-02T00:25:40.600397637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8fw82,Uid:dda3dde4-f9bb-48b1-b9f5-9ee16ac96658,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.600730 kubelet[2636]: E0702 00:25:40.600675 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.600790 kubelet[2636]: E0702 00:25:40.600749 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8fw82" Jul 2 00:25:40.600790 kubelet[2636]: E0702 00:25:40.600774 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8fw82" Jul 2 00:25:40.600839 kubelet[2636]: E0702 00:25:40.600816 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8fw82_kube-system(dda3dde4-f9bb-48b1-b9f5-9ee16ac96658)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8fw82_kube-system(dda3dde4-f9bb-48b1-b9f5-9ee16ac96658)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8fw82" podUID="dda3dde4-f9bb-48b1-b9f5-9ee16ac96658" Jul 2 00:25:40.603408 containerd[1458]: time="2024-07-02T00:25:40.603344409Z" level=error msg="Failed to destroy network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.603945 
containerd[1458]: time="2024-07-02T00:25:40.603851773Z" level=error msg="encountered an error cleaning up failed sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.603945 containerd[1458]: time="2024-07-02T00:25:40.603929671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jh2jm,Uid:0df39595-d448-474d-85a9-1efe22750b9b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.604221 kubelet[2636]: E0702 00:25:40.604171 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:40.604277 kubelet[2636]: E0702 00:25:40.604238 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jh2jm" Jul 2 00:25:40.604277 kubelet[2636]: E0702 00:25:40.604259 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jh2jm" Jul 2 00:25:40.604375 kubelet[2636]: E0702 00:25:40.604313 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jh2jm_kube-system(0df39595-d448-474d-85a9-1efe22750b9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jh2jm_kube-system(0df39595-d448-474d-85a9-1efe22750b9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jh2jm" podUID="0df39595-d448-474d-85a9-1efe22750b9b" Jul 2 00:25:41.005791 kubelet[2636]: I0702 00:25:41.005750 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:41.007724 containerd[1458]: time="2024-07-02T00:25:41.006469777Z" level=info msg="StopPodSandbox for 
\"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" Jul 2 00:25:41.007724 containerd[1458]: time="2024-07-02T00:25:41.006767333Z" level=info msg="Ensure that sandbox f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8 in task-service has been cleanup successfully" Jul 2 00:25:41.008189 kubelet[2636]: I0702 00:25:41.007546 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:41.008241 containerd[1458]: time="2024-07-02T00:25:41.008037216Z" level=info msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\"" Jul 2 00:25:41.008330 containerd[1458]: time="2024-07-02T00:25:41.008284656Z" level=info msg="Ensure that sandbox ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764 in task-service has been cleanup successfully" Jul 2 00:25:41.009468 kubelet[2636]: I0702 00:25:41.009435 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:41.010724 containerd[1458]: time="2024-07-02T00:25:41.010360822Z" level=info msg="StopPodSandbox for \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\"" Jul 2 00:25:41.010724 containerd[1458]: time="2024-07-02T00:25:41.010526076Z" level=info msg="Ensure that sandbox 6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647 in task-service has been cleanup successfully" Jul 2 00:25:41.013238 kubelet[2636]: I0702 00:25:41.013212 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:41.014465 containerd[1458]: time="2024-07-02T00:25:41.013994156Z" level=info msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" Jul 2 00:25:41.014465 containerd[1458]: time="2024-07-02T00:25:41.014214606Z" level=info msg="Ensure that sandbox 4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a in task-service has been cleanup successfully" Jul 2 00:25:41.049111 containerd[1458]: time="2024-07-02T00:25:41.049038065Z" level=error msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" failed" error="failed to destroy network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:41.049445 kubelet[2636]: E0702 00:25:41.049390 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:41.049513 kubelet[2636]: E0702 00:25:41.049466 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"} Jul 2 00:25:41.049549 kubelet[2636]: E0702 00:25:41.049509 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:41.049549 kubelet[2636]: E0702 00:25:41.049535 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8fw82" podUID="dda3dde4-f9bb-48b1-b9f5-9ee16ac96658" Jul 2 00:25:41.055518 containerd[1458]: time="2024-07-02T00:25:41.055456613Z" level=error msg="StopPodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" failed" error="failed to destroy network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:41.055765 kubelet[2636]: E0702 00:25:41.055725 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:41.055994 kubelet[2636]: E0702 00:25:41.055895 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8"} Jul 2 00:25:41.055994 kubelet[2636]: E0702 00:25:41.055941 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df39595-d448-474d-85a9-1efe22750b9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:41.055994 kubelet[2636]: E0702 00:25:41.055967 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df39595-d448-474d-85a9-1efe22750b9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jh2jm" podUID="0df39595-d448-474d-85a9-1efe22750b9b" Jul 2 00:25:41.059424 
containerd[1458]: time="2024-07-02T00:25:41.059372544Z" level=error msg="StopPodSandbox for \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\" failed" error="failed to destroy network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:41.059609 kubelet[2636]: E0702 00:25:41.059573 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:41.059686 kubelet[2636]: E0702 00:25:41.059617 2636 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647"} Jul 2 00:25:41.059686 kubelet[2636]: E0702 00:25:41.059650 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"975d7404-b4fe-4099-94e1-d75e094c0eea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:41.059686 kubelet[2636]: E0702 00:25:41.059671 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"975d7404-b4fe-4099-94e1-d75e094c0eea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" podUID="975d7404-b4fe-4099-94e1-d75e094c0eea" Jul 2 00:25:41.064514 containerd[1458]: time="2024-07-02T00:25:41.064446177Z" level=error msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" failed" error="failed to destroy network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:41.064703 kubelet[2636]: E0702 00:25:41.064675 2636 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:41.064776 kubelet[2636]: E0702 00:25:41.064706 2636 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a"} Jul 2 00:25:41.064776 kubelet[2636]: E0702 00:25:41.064746 2636 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3d03670-d650-4edf-88af-0fb85e858e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:41.064913 kubelet[2636]: E0702 00:25:41.064775 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3d03670-d650-4edf-88af-0fb85e858e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7nzzg" podUID="c3d03670-d650-4edf-88af-0fb85e858e8c" Jul 2 00:25:41.467891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764-shm.mount: Deactivated successfully. Jul 2 00:25:41.468020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647-shm.mount: Deactivated successfully. Jul 2 00:25:41.468126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a-shm.mount: Deactivated successfully. Jul 2 00:25:43.405227 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:54102.service - OpenSSH per-connection server daemon (10.0.0.1:54102). Jul 2 00:25:43.468049 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 54102 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:25:43.470208 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:43.475610 systemd-logind[1441]: New session 11 of user core. Jul 2 00:25:43.486346 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:25:43.627223 sshd[3937]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:43.632398 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:54102.service: Deactivated successfully. Jul 2 00:25:43.635025 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:25:43.635825 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:25:43.637143 systemd-logind[1441]: Removed session 11. Jul 2 00:25:47.719988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690752459.mount: Deactivated successfully. Jul 2 00:25:48.646771 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:60326.service - OpenSSH per-connection server daemon (10.0.0.1:60326). Jul 2 00:25:48.690432 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 60326 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:25:48.693326 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:48.699663 systemd-logind[1441]: New session 12 of user core. 
Jul 2 00:25:48.707197 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:25:48.910229 sshd[3959]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:48.916171 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:60326.service: Deactivated successfully. Jul 2 00:25:48.920297 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:25:48.921589 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:25:48.922815 systemd-logind[1441]: Removed session 12. Jul 2 00:25:49.483589 containerd[1458]: time="2024-07-02T00:25:49.483529368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:49.511623 containerd[1458]: time="2024-07-02T00:25:49.511542898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:25:49.535586 containerd[1458]: time="2024-07-02T00:25:49.535492942Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:49.556310 containerd[1458]: time="2024-07-02T00:25:49.556232688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:49.557151 containerd[1458]: time="2024-07-02T00:25:49.557036984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.552981156s" Jul 2 00:25:49.557151 containerd[1458]: time="2024-07-02T00:25:49.557073322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:25:49.569249 containerd[1458]: time="2024-07-02T00:25:49.569185259Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:25:49.739337 containerd[1458]: time="2024-07-02T00:25:49.739128605Z" level=info msg="CreateContainer within sandbox \"56f0575fb29961b33bcf41abcb77fd4c2561bf9cef6192fc129d8aa88cedc48c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb39b9330be5e8a65183f07153bdded2bdf4680c0a270c4768923badda2f0563\"" Jul 2 00:25:49.740057 containerd[1458]: time="2024-07-02T00:25:49.740007702Z" level=info msg="StartContainer for \"fb39b9330be5e8a65183f07153bdded2bdf4680c0a270c4768923badda2f0563\"" Jul 2 00:25:49.814041 systemd[1]: Started cri-containerd-fb39b9330be5e8a65183f07153bdded2bdf4680c0a270c4768923badda2f0563.scope - libcontainer container fb39b9330be5e8a65183f07153bdded2bdf4680c0a270c4768923badda2f0563. Jul 2 00:25:50.326880 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:25:50.327227 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 2 00:25:51.612121 containerd[1458]: time="2024-07-02T00:25:51.612059235Z" level=info msg="StartContainer for \"fb39b9330be5e8a65183f07153bdded2bdf4680c0a270c4768923badda2f0563\" returns successfully" Jul 2 00:25:52.618724 kubelet[2636]: E0702 00:25:52.618660 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:52.749676 kubelet[2636]: I0702 00:25:52.749584 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pknnd" podStartSLOduration=6.139366999 podStartE2EDuration="25.749555284s" podCreationTimestamp="2024-07-02 00:25:27 +0000 UTC" firstStartedPulling="2024-07-02 00:25:29.947907396 +0000 UTC m=+31.243878841" lastFinishedPulling="2024-07-02 00:25:49.558095691 +0000 UTC m=+50.854067126" observedRunningTime="2024-07-02 00:25:52.749337311 +0000 UTC m=+54.045308766" watchObservedRunningTime="2024-07-02 00:25:52.749555284 +0000 UTC m=+54.045526729" Jul 2 00:25:52.793336 containerd[1458]: time="2024-07-02T00:25:52.793237075Z" level=info msg="StopPodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" Jul 2 00:25:52.794594 containerd[1458]: time="2024-07-02T00:25:52.794033515Z" level=info msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" Jul 2 00:25:53.620603 kubelet[2636]: E0702 00:25:53.620564 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.486 [INFO][4087] k8s.go 608: Cleaning up netns ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.487 [INFO][4087] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" iface="eth0" netns="/var/run/netns/cni-471a1a65-2057-22de-538d-8a2ee050c25a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.488 [INFO][4087] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" iface="eth0" netns="/var/run/netns/cni-471a1a65-2057-22de-538d-8a2ee050c25a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.488 [INFO][4087] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" iface="eth0" netns="/var/run/netns/cni-471a1a65-2057-22de-538d-8a2ee050c25a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.488 [INFO][4087] k8s.go 615: Releasing IP address(es) ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.488 [INFO][4087] utils.go 188: Calico CNI releasing IP address ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.523 [INFO][4107] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.523 [INFO][4107] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.523 [INFO][4107] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.570 [WARNING][4107] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.570 [INFO][4107] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.731 [INFO][4107] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:53.737797 containerd[1458]: 2024-07-02 00:25:53.734 [INFO][4087] k8s.go 621: Teardown processing complete. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:25:53.738539 containerd[1458]: time="2024-07-02T00:25:53.738496138Z" level=info msg="TearDown network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" successfully" Jul 2 00:25:53.738539 containerd[1458]: time="2024-07-02T00:25:53.738538598Z" level=info msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" returns successfully" Jul 2 00:25:53.739441 containerd[1458]: time="2024-07-02T00:25:53.739417003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nzzg,Uid:c3d03670-d650-4edf-88af-0fb85e858e8c,Namespace:calico-system,Attempt:1,}" Jul 2 00:25:53.741234 systemd[1]: run-netns-cni\x2d471a1a65\x2d2057\x2d22de\x2d538d\x2d8a2ee050c25a.mount: Deactivated successfully. 
Jul 2 00:25:53.793117 containerd[1458]: time="2024-07-02T00:25:53.793051271Z" level=info msg="StopPodSandbox for \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\"" Jul 2 00:25:53.794204 containerd[1458]: time="2024-07-02T00:25:53.794118201Z" level=info msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\"" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.473 [INFO][4082] k8s.go 608: Cleaning up netns ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.473 [INFO][4082] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" iface="eth0" netns="/var/run/netns/cni-3d46109d-898e-1620-ac2f-39c1cd72136a" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.473 [INFO][4082] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" iface="eth0" netns="/var/run/netns/cni-3d46109d-898e-1620-ac2f-39c1cd72136a" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.474 [INFO][4082] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" iface="eth0" netns="/var/run/netns/cni-3d46109d-898e-1620-ac2f-39c1cd72136a" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.474 [INFO][4082] k8s.go 615: Releasing IP address(es) ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.474 [INFO][4082] utils.go 188: Calico CNI releasing IP address ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.523 [INFO][4100] ipam_plugin.go 411: Releasing address using handleID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.523 [INFO][4100] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.731 [INFO][4100] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.770 [WARNING][4100] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.770 [INFO][4100] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.808 [INFO][4100] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:53.817721 containerd[1458]: 2024-07-02 00:25:53.814 [INFO][4082] k8s.go 621: Teardown processing complete. 
ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:25:53.818313 containerd[1458]: time="2024-07-02T00:25:53.818023680Z" level=info msg="TearDown network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" successfully" Jul 2 00:25:53.818313 containerd[1458]: time="2024-07-02T00:25:53.818066451Z" level=info msg="StopPodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" returns successfully" Jul 2 00:25:53.818683 kubelet[2636]: E0702 00:25:53.818574 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:53.820293 containerd[1458]: time="2024-07-02T00:25:53.820235159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jh2jm,Uid:0df39595-d448-474d-85a9-1efe22750b9b,Namespace:kube-system,Attempt:1,}" Jul 2 00:25:53.820681 systemd[1]: run-netns-cni\x2d3d46109d\x2d898e\x2d1620\x2dac2f\x2d39c1cd72136a.mount: Deactivated successfully. Jul 2 00:25:53.929174 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:60342.service - OpenSSH per-connection server daemon (10.0.0.1:60342). Jul 2 00:25:54.154552 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 60342 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:25:54.157169 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:54.162337 systemd-logind[1441]: New session 13 of user core. Jul 2 00:25:54.173162 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.237 [INFO][4173] k8s.go 608: Cleaning up netns ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.238 [INFO][4173] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" iface="eth0" netns="/var/run/netns/cni-4766582e-11d6-7642-c412-42bde5edef06" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4173] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" iface="eth0" netns="/var/run/netns/cni-4766582e-11d6-7642-c412-42bde5edef06" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4173] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" iface="eth0" netns="/var/run/netns/cni-4766582e-11d6-7642-c412-42bde5edef06" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4173] k8s.go 615: Releasing IP address(es) ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4173] utils.go 188: Calico CNI releasing IP address ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.277 [INFO][4198] ipam_plugin.go 411: Releasing address using handleID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.277 [INFO][4198] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.277 [INFO][4198] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.440 [WARNING][4198] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.440 [INFO][4198] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.454 [INFO][4198] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:54.470186 containerd[1458]: 2024-07-02 00:25:54.464 [INFO][4173] k8s.go 621: Teardown processing complete. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Jul 2 00:25:54.475192 containerd[1458]: time="2024-07-02T00:25:54.473272513Z" level=info msg="TearDown network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" successfully" Jul 2 00:25:54.475192 containerd[1458]: time="2024-07-02T00:25:54.473318109Z" level=info msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" returns successfully" Jul 2 00:25:54.475410 kubelet[2636]: E0702 00:25:54.473905 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:54.477408 containerd[1458]: time="2024-07-02T00:25:54.477306434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8fw82,Uid:dda3dde4-f9bb-48b1-b9f5-9ee16ac96658,Namespace:kube-system,Attempt:1,}" Jul 2 00:25:54.503291 sshd[4188]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:54.512793 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:60342.service: Deactivated successfully. 
Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.238 [INFO][4172] k8s.go 608: Cleaning up netns ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.238 [INFO][4172] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" iface="eth0" netns="/var/run/netns/cni-eb1b5a01-f357-6322-85f9-151339399cea" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4172] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" iface="eth0" netns="/var/run/netns/cni-eb1b5a01-f357-6322-85f9-151339399cea" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4172] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" iface="eth0" netns="/var/run/netns/cni-eb1b5a01-f357-6322-85f9-151339399cea" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4172] k8s.go 615: Releasing IP address(es) ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.239 [INFO][4172] utils.go 188: Calico CNI releasing IP address ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.282 [INFO][4197] ipam_plugin.go 411: Releasing address using handleID ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" HandleID="k8s-pod-network.6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.282 [INFO][4197] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.455 [INFO][4197] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.470 [WARNING][4197] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" HandleID="k8s-pod-network.6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.470 [INFO][4197] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" HandleID="k8s-pod-network.6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.481 [INFO][4197] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:54.514308 containerd[1458]: 2024-07-02 00:25:54.498 [INFO][4172] k8s.go 621: Teardown processing complete. 
ContainerID="6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647" Jul 2 00:25:54.515160 containerd[1458]: time="2024-07-02T00:25:54.514617232Z" level=info msg="TearDown network for sandbox \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\" successfully" Jul 2 00:25:54.515160 containerd[1458]: time="2024-07-02T00:25:54.514656276Z" level=info msg="StopPodSandbox for \"6775716319bb5d9a7d0a3aef19657209bb1defc5119566ad0706fd44d1880647\" returns successfully" Jul 2 00:25:54.515788 containerd[1458]: time="2024-07-02T00:25:54.515663474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656cfcb6dd-26g68,Uid:975d7404-b4fe-4099-94e1-d75e094c0eea,Namespace:calico-system,Attempt:1,}" Jul 2 00:25:54.522077 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:25:54.523240 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:25:54.524503 systemd-logind[1441]: Removed session 13. Jul 2 00:25:54.740617 systemd[1]: run-netns-cni\x2d4766582e\x2d11d6\x2d7642\x2dc412\x2d42bde5edef06.mount: Deactivated successfully. Jul 2 00:25:54.741161 systemd[1]: run-netns-cni\x2deb1b5a01\x2df357\x2d6322\x2d85f9\x2d151339399cea.mount: Deactivated successfully. Jul 2 00:25:55.040560 systemd-networkd[1386]: calief16bf90c01: Link UP Jul 2 00:25:55.040800 systemd-networkd[1386]: calief16bf90c01: Gained carrier Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.508 [INFO][4217] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.633 [INFO][4217] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7nzzg-eth0 csi-node-driver- calico-system c3d03670-d650-4edf-88af-0fb85e858e8c 886 0 2024-07-02 00:25:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-7nzzg eth0 default [] [] [kns.calico-system ksa.calico-system.default] calief16bf90c01 [] []}} ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.633 [INFO][4217] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.720 [INFO][4261] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" HandleID="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.935 [INFO][4261] ipam_plugin.go 264: Auto assigning IP ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" HandleID="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00038a4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7nzzg", "timestamp":"2024-07-02 00:25:54.719450708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.936 [INFO][4261] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.936 [INFO][4261] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.936 [INFO][4261] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.938 [INFO][4261] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.944 [INFO][4261] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.950 [INFO][4261] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.952 [INFO][4261] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.954 [INFO][4261] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.954 [INFO][4261] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.956 [INFO][4261] ipam.go 1685: Creating new handle: k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603 Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:54.960 [INFO][4261] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4261] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4261] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" host="localhost" Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4261] ipam_plugin.go 373: Released host-wide IPAM lock. 
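The ipam.go records above trace a consistent sequence: acquire the host-wide IPAM lock, look up this host's affinities, confirm the affinity for block 192.168.88.128/26, load the block, claim the next free address, and write the block back before releasing the lock. A toy model of that claim step under stated assumptions (an in-memory bitmap and a mutex standing in for the lock; Calico actually persists blocks in its datastore, so this is not its implementation):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM affinity block: a /26
// owned by one host, with a per-address allocation bitmap.
type block struct {
	mu     sync.Mutex // stands in for the host-wide IPAM lock
	prefix netip.Prefix
	used   [64]bool
}

// assign claims the next free address in the block, mirroring the
// lock -> load block -> claim -> write-back steps in the log.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	addr := b.prefix.Addr()
	for i := 0; i < 64; i++ {
		if !b.used[i] {
			b.used[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{prefix: netip.MustParsePrefix("192.168.88.128/26")}
	b.assign() // reserve .128, the network address
	for _, name := range []string{"csi-node-driver-7nzzg", "coredns-7db6d8ff4d-jh2jm"} {
		if ip, ok := b.assign(); ok {
			fmt.Println(name, "->", ip) // .129 then .130, as in the log
		}
	}
}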
Jul 2 00:25:55.051254 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4261] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" HandleID="k8s-pod-network.c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.033 [INFO][4217] k8s.go 386: Populated endpoint ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7nzzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d03670-d650-4edf-88af-0fb85e858e8c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7nzzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calief16bf90c01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.033 [INFO][4217] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.033 [INFO][4217] dataplane_linux.go 68: Setting the host side veth name to calief16bf90c01 ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.040 [INFO][4217] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.040 [INFO][4217] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7nzzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d03670-d650-4edf-88af-0fb85e858e8c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603", Pod:"csi-node-driver-7nzzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calief16bf90c01", MAC:"9a:c9:a2:ae:1f:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:55.052673 containerd[1458]: 2024-07-02 00:25:55.048 [INFO][4217] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603" Namespace="calico-system" Pod="csi-node-driver-7nzzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:25:55.256746 systemd-networkd[1386]: calicf70da2be2e: Link UP Jul 2 00:25:55.257013 systemd-networkd[1386]: calicf70da2be2e: Gained carrier Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.530 [INFO][4229] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.636 [INFO][4229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0 coredns-7db6d8ff4d- kube-system 0df39595-d448-474d-85a9-1efe22750b9b 884 0 2024-07-02 00:25:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jh2jm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicf70da2be2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.636 [INFO][4229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.700 [INFO][4252] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" HandleID="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" 
Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.938 [INFO][4252] ipam_plugin.go 264: Auto assigning IP ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" HandleID="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jh2jm", "timestamp":"2024-07-02 00:25:54.700171162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:54.938 [INFO][4252] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4252] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.030 [INFO][4252] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.199 [INFO][4252] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.203 [INFO][4252] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.207 [INFO][4252] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.209 [INFO][4252] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.211 [INFO][4252] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.211 [INFO][4252] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.213 [INFO][4252] ipam.go 1685: Creating new handle: k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8 Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.217 [INFO][4252] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.250 [INFO][4252] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.250 [INFO][4252] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" host="localhost" Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.250 [INFO][4252] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:25:55.887756 containerd[1458]: 2024-07-02 00:25:55.250 [INFO][4252] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" HandleID="k8s-pod-network.2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.253 [INFO][4229] k8s.go 386: Populated endpoint ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0df39595-d448-474d-85a9-1efe22750b9b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jh2jm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf70da2be2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.254 [INFO][4229] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.254 [INFO][4229] dataplane_linux.go 68: Setting the host side veth name to calicf70da2be2e ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.255 [INFO][4229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.256 [INFO][4229] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0df39595-d448-474d-85a9-1efe22750b9b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8", Pod:"coredns-7db6d8ff4d-jh2jm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf70da2be2e", MAC:"6a:57:f1:8c:6b:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:55.888649 containerd[1458]: 2024-07-02 00:25:55.884 [INFO][4229] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jh2jm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:25:56.573932 systemd-networkd[1386]: calicf70da2be2e: Gained IPv6LL Jul 2 00:25:56.578445 systemd-networkd[1386]: vxlan.calico: Link UP Jul 2 00:25:56.578452 systemd-networkd[1386]: vxlan.calico: Gained carrier Jul 2 00:25:56.637980 systemd-networkd[1386]: calief16bf90c01: Gained IPv6LL Jul 2 00:25:56.889936 containerd[1458]: time="2024-07-02T00:25:56.888374060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:56.889936 containerd[1458]: time="2024-07-02T00:25:56.889898817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:56.889936 containerd[1458]: time="2024-07-02T00:25:56.889936418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:56.889936 containerd[1458]: time="2024-07-02T00:25:56.889962537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:56.932222 systemd[1]: Started cri-containerd-c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603.scope - libcontainer container c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603. Jul 2 00:25:56.947567 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:25:56.955147 containerd[1458]: time="2024-07-02T00:25:56.954986647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:56.955147 containerd[1458]: time="2024-07-02T00:25:56.955081948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:56.955147 containerd[1458]: time="2024-07-02T00:25:56.955110151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:56.955147 containerd[1458]: time="2024-07-02T00:25:56.955142192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:56.966762 containerd[1458]: time="2024-07-02T00:25:56.966720593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nzzg,Uid:c3d03670-d650-4edf-88af-0fb85e858e8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603\"" Jul 2 00:25:56.972097 containerd[1458]: time="2024-07-02T00:25:56.970997952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:25:56.984028 systemd[1]: Started cri-containerd-2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8.scope - libcontainer container 2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8. Jul 2 00:25:57.001018 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:25:57.034778 containerd[1458]: time="2024-07-02T00:25:57.034726917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jh2jm,Uid:0df39595-d448-474d-85a9-1efe22750b9b,Namespace:kube-system,Attempt:1,} returns sandbox id \"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8\"" Jul 2 00:25:57.035627 kubelet[2636]: E0702 00:25:57.035581 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:57.039984 containerd[1458]: time="2024-07-02T00:25:57.039940796Z" level=info msg="CreateContainer within sandbox \"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:25:58.301055 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jul 2 00:25:58.618890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307230321.mount: Deactivated successfully. Jul 2 00:25:58.678241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849132124.mount: Deactivated successfully. 
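The recurring kubelet dns.go:153 error reflects the glibc resolver limit: resolv.conf honors at most three nameserver entries, so kubelet truncates longer lists and logs the applied line, here "1.1.1.1 1.0.0.1 8.8.8.8". A hedged sketch of that trimming; the helper is my own, not kubelet's code, and the fourth server in the example is hypothetical (the log only shows the applied line):

package main

import "fmt"

// maxNameservers mirrors glibc's MAXNS: the resolver only reads the
// first three nameserver lines from resolv.conf.
const maxNameservers = 3

// applyNameserverLimit keeps the first three servers and reports the
// rest, roughly the behavior the kubelet warning describes.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	applied, omitted := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}) // 8.8.4.4 is hypothetical
	fmt.Println("applied:", applied, "omitted:", omitted)
}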
Jul 2 00:25:58.779449 containerd[1458]: time="2024-07-02T00:25:58.779385433Z" level=info msg="StopPodSandbox for \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\"" Jul 2 00:25:58.779990 containerd[1458]: time="2024-07-02T00:25:58.779508245Z" level=info msg="TearDown network for sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" successfully" Jul 2 00:25:58.779990 containerd[1458]: time="2024-07-02T00:25:58.779522412Z" level=info msg="StopPodSandbox for \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" returns successfully" Jul 2 00:25:58.780204 containerd[1458]: time="2024-07-02T00:25:58.780155591Z" level=info msg="RemovePodSandbox for \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\"" Jul 2 00:25:58.782996 containerd[1458]: time="2024-07-02T00:25:58.782970498Z" level=info msg="Forcibly stopping sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\"" Jul 2 00:25:58.795607 containerd[1458]: time="2024-07-02T00:25:58.783053025Z" level=info msg="TearDown network for sandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" successfully" Jul 2 00:25:59.148008 systemd-networkd[1386]: cali99ed898119a: Link UP Jul 2 00:25:59.148826 systemd-networkd[1386]: cali99ed898119a: Gained carrier Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.581 [INFO][4583] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0 coredns-7db6d8ff4d- kube-system dda3dde4-f9bb-48b1-b9f5-9ee16ac96658 896 0 2024-07-02 00:25:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8fw82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali99ed898119a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.581 [INFO][4583] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.616 [INFO][4601] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" HandleID="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.627 [INFO][4601] ipam_plugin.go 264: Auto assigning IP ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" HandleID="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e60a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8fw82", "timestamp":"2024-07-02 00:25:58.616775492 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.627 [INFO][4601] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.627 [INFO][4601] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.627 [INFO][4601] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.629 [INFO][4601] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.633 [INFO][4601] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.675 [INFO][4601] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.677 [INFO][4601] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.680 [INFO][4601] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.681 [INFO][4601] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.683 [INFO][4601] ipam.go 1685: Creating new handle: k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9 Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:58.687 [INFO][4601] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:59.141 [INFO][4601] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:59.141 [INFO][4601] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" host="localhost" Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:59.141 [INFO][4601] ipam_plugin.go 373: Released host-wide IPAM lock. 
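All of these workloads land in the same affine block, 192.168.88.128/26, which spans the 64 addresses .128 through .191; the log shows them claimed sequentially (.129 for csi-node-driver-7nzzg, .130 and .131 for the two coredns pods). The range arithmetic, for reference:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 2^(32-26) = 64 addresses
	first := p.Addr()
	last := first
	for i := 1; i < size; i++ {
		last = last.Next()
	}
	fmt.Printf("%v: %d addresses, %v - %v\n", p, size, first, last)
	// 192.168.88.128/26: 64 addresses, 192.168.88.128 - 192.168.88.191
}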
Jul 2 00:25:59.163842 containerd[1458]: 2024-07-02 00:25:59.141 [INFO][4601] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" HandleID="k8s-pod-network.b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.145 [INFO][4583] k8s.go 386: Populated endpoint ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8fw82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ed898119a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.145 [INFO][4583] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.146 [INFO][4583] dataplane_linux.go 68: Setting the host side veth name to cali99ed898119a ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.149 [INFO][4583] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.150 [INFO][4583] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9", Pod:"coredns-7db6d8ff4d-8fw82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ed898119a", MAC:"16:b2:93:c5:4f:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:59.164807 containerd[1458]: 2024-07-02 00:25:59.159 [INFO][4583] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8fw82" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0" Jul 2 00:25:59.522764 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486). Jul 2 00:25:59.574649 containerd[1458]: time="2024-07-02T00:25:59.574574491Z" level=info msg="CreateContainer within sandbox \"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12fd2dabcea895d8944956ab849daa50404ea3502e54e683fc811e8561d89eb6\"" Jul 2 00:25:59.575742 containerd[1458]: time="2024-07-02T00:25:59.575698869Z" level=info msg="StartContainer for \"12fd2dabcea895d8944956ab849daa50404ea3502e54e683fc811e8561d89eb6\"" Jul 2 00:25:59.593933 containerd[1458]: time="2024-07-02T00:25:59.593630831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:59.593933 containerd[1458]: time="2024-07-02T00:25:59.593693149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:59.593933 containerd[1458]: time="2024-07-02T00:25:59.593711653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:59.593933 containerd[1458]: time="2024-07-02T00:25:59.593725339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:59.624123 systemd[1]: Started cri-containerd-12fd2dabcea895d8944956ab849daa50404ea3502e54e683fc811e8561d89eb6.scope - libcontainer container 12fd2dabcea895d8944956ab849daa50404ea3502e54e683fc811e8561d89eb6. Jul 2 00:25:59.639023 systemd[1]: Started cri-containerd-b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9.scope - libcontainer container b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9. Jul 2 00:25:59.642213 sshd[4646]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:25:59.644232 sshd[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:59.650634 systemd-logind[1441]: New session 14 of user core. Jul 2 00:25:59.655035 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:25:59.664021 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:25:59.799277 systemd-networkd[1386]: cali463cf44c876: Link UP Jul 2 00:25:59.800874 systemd-networkd[1386]: cali463cf44c876: Gained carrier Jul 2 00:25:59.876325 containerd[1458]: time="2024-07-02T00:25:59.876131147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
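In the dumped WorkloadEndpoint structs the port numbers are printed in hex: Port:0x35 is DNS on 53 and Port:0x23c1 is the coredns metrics port 9153, matching the decimal [{dns UDP 53 0} ... {metrics TCP 9153 0}] form shown when the endpoint is first found. A one-line check:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153: DNS and the coredns metrics port
}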
Jul 2 00:25:59.903033 containerd[1458]: time="2024-07-02T00:25:59.902944203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8fw82,Uid:dda3dde4-f9bb-48b1-b9f5-9ee16ac96658,Namespace:kube-system,Attempt:1,} returns sandbox id \"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9\"" Jul 2 00:25:59.903224 containerd[1458]: time="2024-07-02T00:25:59.903064902Z" level=info msg="StartContainer for \"12fd2dabcea895d8944956ab849daa50404ea3502e54e683fc811e8561d89eb6\" returns successfully" Jul 2 00:25:59.908777 kubelet[2636]: E0702 00:25:59.908725 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:25:59.911537 containerd[1458]: time="2024-07-02T00:25:59.911491186Z" level=info msg="RemovePodSandbox \"231de57fcefb0afbc9e89535f3fd9999b646365ec98a5223017bd1127b867484\" returns successfully" Jul 2 00:25:59.912184 containerd[1458]: time="2024-07-02T00:25:59.911794229Z" level=info msg="CreateContainer within sandbox \"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:25:59.912184 containerd[1458]: time="2024-07-02T00:25:59.911934044Z" level=info msg="StopPodSandbox for \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\"" Jul 2 00:25:59.915526 containerd[1458]: time="2024-07-02T00:25:59.912054772Z" level=info msg="TearDown network for sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" successfully" Jul 2 00:25:59.915610 containerd[1458]: time="2024-07-02T00:25:59.915526512Z" level=info msg="StopPodSandbox for \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" returns successfully" Jul 2 00:25:59.915947 containerd[1458]: time="2024-07-02T00:25:59.915904016Z" level=info msg="RemovePodSandbox for \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\"" Jul 2 00:25:59.916015 containerd[1458]: time="2024-07-02T00:25:59.915955313Z" level=info msg="Forcibly stopping sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\"" Jul 2 00:25:59.916079 containerd[1458]: time="2024-07-02T00:25:59.916036297Z" level=info msg="TearDown network for sandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" successfully" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.478 [INFO][4614] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0 calico-kube-controllers-656cfcb6dd- calico-system 975d7404-b4fe-4099-94e1-d75e094c0eea 895 0 2024-07-02 00:25:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:656cfcb6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-656cfcb6dd-26g68 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali463cf44c876 [] []}} ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.478 [INFO][4614] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.564 [INFO][4640] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" HandleID="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.724 [INFO][4640] ipam_plugin.go 264: Auto assigning IP ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" HandleID="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fe80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-656cfcb6dd-26g68", "timestamp":"2024-07-02 00:25:59.564501621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.724 [INFO][4640] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.724 [INFO][4640] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.724 [INFO][4640] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.726 [INFO][4640] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.731 [INFO][4640] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.739 [INFO][4640] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.742 [INFO][4640] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.749 [INFO][4640] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.749 [INFO][4640] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.752 [INFO][4640] ipam.go 1685: Creating new handle: k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35 Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.759 [INFO][4640] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.791 [INFO][4640] ipam.go 1216: Successfully 
claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.791 [INFO][4640] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" host="localhost" Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.791 [INFO][4640] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:59.994351 containerd[1458]: 2024-07-02 00:25:59.791 [INFO][4640] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" HandleID="k8s-pod-network.46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Workload="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.995081 containerd[1458]: 2024-07-02 00:25:59.795 [INFO][4614] k8s.go 386: Populated endpoint ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0", GenerateName:"calico-kube-controllers-656cfcb6dd-", Namespace:"calico-system", SelfLink:"", UID:"975d7404-b4fe-4099-94e1-d75e094c0eea", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656cfcb6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-656cfcb6dd-26g68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali463cf44c876", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:59.995081 containerd[1458]: 2024-07-02 00:25:59.796 [INFO][4614] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.995081 containerd[1458]: 2024-07-02 00:25:59.796 [INFO][4614] dataplane_linux.go 68: Setting the host side veth name to cali463cf44c876 ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.995081 
containerd[1458]: 2024-07-02 00:25:59.801 [INFO][4614] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:25:59.995081 containerd[1458]: 2024-07-02 00:25:59.801 [INFO][4614] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0", GenerateName:"calico-kube-controllers-656cfcb6dd-", Namespace:"calico-system", SelfLink:"", UID:"975d7404-b4fe-4099-94e1-d75e094c0eea", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656cfcb6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35", Pod:"calico-kube-controllers-656cfcb6dd-26g68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali463cf44c876", MAC:"ce:6d:47:08:29:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:59.995081 containerd[1458]: 2024-07-02 00:25:59.991 [INFO][4614] k8s.go 500: Wrote updated endpoint to datastore ContainerID="46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35" Namespace="calico-system" Pod="calico-kube-controllers-656cfcb6dd-26g68" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--656cfcb6dd--26g68-eth0" Jul 2 00:26:00.097909 sshd[4646]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:00.104840 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:38486.service: Deactivated successfully. Jul 2 00:26:00.108050 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:26:00.108924 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:26:00.110189 systemd-logind[1441]: Removed session 14. Jul 2 00:26:00.646893 kubelet[2636]: E0702 00:26:00.646683 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:00.773172 containerd[1458]: time="2024-07-02T00:26:00.773020423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:26:00.773172 containerd[1458]: time="2024-07-02T00:26:00.773109471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:00.773172 containerd[1458]: time="2024-07-02T00:26:00.773128567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:26:00.773172 containerd[1458]: time="2024-07-02T00:26:00.773140420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:00.797045 systemd[1]: Started cri-containerd-46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35.scope - libcontainer container 46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35. Jul 2 00:26:00.811316 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:26:00.841128 containerd[1458]: time="2024-07-02T00:26:00.841062021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656cfcb6dd-26g68,Uid:975d7404-b4fe-4099-94e1-d75e094c0eea,Namespace:calico-system,Attempt:1,} returns sandbox id \"46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35\"" Jul 2 00:26:00.864009 systemd-networkd[1386]: cali99ed898119a: Gained IPv6LL Jul 2 00:26:00.887712 containerd[1458]: time="2024-07-02T00:26:00.887641645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:26:00.888244 containerd[1458]: time="2024-07-02T00:26:00.887738007Z" level=info msg="RemovePodSandbox \"64ffb7b38a15a4e9c28e63a6b24067038a926552e6b480d2f1d6d83e0e5a95bb\" returns successfully" Jul 2 00:26:00.888371 containerd[1458]: time="2024-07-02T00:26:00.888329746Z" level=info msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" Jul 2 00:26:01.055166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170600256.mount: Deactivated successfully. Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.025 [WARNING][4809] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7nzzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d03670-d650-4edf-88af-0fb85e858e8c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603", Pod:"csi-node-driver-7nzzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calief16bf90c01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.026 [INFO][4809] k8s.go 608: Cleaning up netns ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.026 [INFO][4809] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" iface="eth0" netns="" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.026 [INFO][4809] k8s.go 615: Releasing IP address(es) ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.026 [INFO][4809] utils.go 188: Calico CNI releasing IP address ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.062 [INFO][4817] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.062 [INFO][4817] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.062 [INFO][4817] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.068 [WARNING][4817] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.068 [INFO][4817] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.071 [INFO][4817] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:01.078669 containerd[1458]: 2024-07-02 00:26:01.074 [INFO][4809] k8s.go 621: Teardown processing complete. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.079481 containerd[1458]: time="2024-07-02T00:26:01.078701676Z" level=info msg="TearDown network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" successfully" Jul 2 00:26:01.079481 containerd[1458]: time="2024-07-02T00:26:01.078734910Z" level=info msg="StopPodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" returns successfully" Jul 2 00:26:01.079481 containerd[1458]: time="2024-07-02T00:26:01.079356565Z" level=info msg="RemovePodSandbox for \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" Jul 2 00:26:01.079481 containerd[1458]: time="2024-07-02T00:26:01.079417029Z" level=info msg="Forcibly stopping sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\"" Jul 2 00:26:01.501120 systemd-networkd[1386]: cali463cf44c876: Gained IPv6LL Jul 2 00:26:01.652156 kubelet[2636]: E0702 00:26:01.652071 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.277 [WARNING][4840] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7nzzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d03670-d650-4edf-88af-0fb85e858e8c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603", Pod:"csi-node-driver-7nzzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calief16bf90c01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.277 [INFO][4840] k8s.go 608: Cleaning up netns ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.277 [INFO][4840] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" iface="eth0" netns="" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.277 [INFO][4840] k8s.go 615: Releasing IP address(es) ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.277 [INFO][4840] utils.go 188: Calico CNI releasing IP address ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.373 [INFO][4848] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.373 [INFO][4848] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.373 [INFO][4848] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.508 [WARNING][4848] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.508 [INFO][4848] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" HandleID="k8s-pod-network.4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Workload="localhost-k8s-csi--node--driver--7nzzg-eth0" Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.896 [INFO][4848] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:01.904914 containerd[1458]: 2024-07-02 00:26:01.900 [INFO][4840] k8s.go 621: Teardown processing complete. ContainerID="4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a" Jul 2 00:26:01.904914 containerd[1458]: time="2024-07-02T00:26:01.903871590Z" level=info msg="TearDown network for sandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" successfully" Jul 2 00:26:01.986162 kubelet[2636]: I0702 00:26:01.986054 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jh2jm" podStartSLOduration=47.98603031 podStartE2EDuration="47.98603031s" podCreationTimestamp="2024-07-02 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:26:00.685812925 +0000 UTC m=+61.981784381" watchObservedRunningTime="2024-07-02 00:26:01.98603031 +0000 UTC m=+63.282001755" Jul 2 00:26:02.401479 containerd[1458]: time="2024-07-02T00:26:02.401396523Z" level=info msg="CreateContainer within sandbox \"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e298d0d8b8bdb1a3079036fa0ab06011c42ea2f911c738028563b9525c5bd2ce\"" Jul 2 00:26:02.403177 containerd[1458]: time="2024-07-02T00:26:02.402447761Z" level=info msg="StartContainer for \"e298d0d8b8bdb1a3079036fa0ab06011c42ea2f911c738028563b9525c5bd2ce\"" Jul 2 00:26:02.447089 systemd[1]: Started cri-containerd-e298d0d8b8bdb1a3079036fa0ab06011c42ea2f911c738028563b9525c5bd2ce.scope - libcontainer container e298d0d8b8bdb1a3079036fa0ab06011c42ea2f911c738028563b9525c5bd2ce. Jul 2 00:26:02.557074 containerd[1458]: time="2024-07-02T00:26:02.557014207Z" level=info msg="StartContainer for \"e298d0d8b8bdb1a3079036fa0ab06011c42ea2f911c738028563b9525c5bd2ce\" returns successfully" Jul 2 00:26:02.568415 containerd[1458]: time="2024-07-02T00:26:02.568274304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:26:02.568415 containerd[1458]: time="2024-07-02T00:26:02.568374352Z" level=info msg="RemovePodSandbox \"4f6909c1fc0be93825a10cd3fb8608be3f41cc7fe58aaa6411294738eb03052a\" returns successfully" Jul 2 00:26:02.569076 containerd[1458]: time="2024-07-02T00:26:02.569038328Z" level=info msg="StopPodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" Jul 2 00:26:02.658739 kubelet[2636]: E0702 00:26:02.657596 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:02.667732 kubelet[2636]: E0702 00:26:02.667006 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:02.715295 kubelet[2636]: I0702 00:26:02.714919 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8fw82" podStartSLOduration=48.714893079 podStartE2EDuration="48.714893079s" podCreationTimestamp="2024-07-02 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:26:02.714699232 +0000 UTC m=+64.010670687" watchObservedRunningTime="2024-07-02 00:26:02.714893079 +0000 UTC m=+64.010864524" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.657 [WARNING][4919] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0df39595-d448-474d-85a9-1efe22750b9b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8", Pod:"coredns-7db6d8ff4d-jh2jm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf70da2be2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.658 [INFO][4919] k8s.go 608: 
Cleaning up netns ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.658 [INFO][4919] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" iface="eth0" netns="" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.658 [INFO][4919] k8s.go 615: Releasing IP address(es) ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.659 [INFO][4919] utils.go 188: Calico CNI releasing IP address ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.696 [INFO][4926] ipam_plugin.go 411: Releasing address using handleID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.696 [INFO][4926] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.696 [INFO][4926] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.750 [WARNING][4926] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.750 [INFO][4926] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.913 [INFO][4926] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:02.919704 containerd[1458]: 2024-07-02 00:26:02.916 [INFO][4919] k8s.go 621: Teardown processing complete. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:02.919704 containerd[1458]: time="2024-07-02T00:26:02.918929806Z" level=info msg="TearDown network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" successfully" Jul 2 00:26:02.919704 containerd[1458]: time="2024-07-02T00:26:02.918959323Z" level=info msg="StopPodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" returns successfully" Jul 2 00:26:02.920557 containerd[1458]: time="2024-07-02T00:26:02.919720883Z" level=info msg="RemovePodSandbox for \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" Jul 2 00:26:02.920557 containerd[1458]: time="2024-07-02T00:26:02.919786737Z" level=info msg="Forcibly stopping sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\"" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.123 [WARNING][4950] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0df39595-d448-474d-85a9-1efe22750b9b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d8b5b7af4dede028a36bed9158ba4f709b0c6990a9015bd11b3d688c781ceb8", Pod:"coredns-7db6d8ff4d-jh2jm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf70da2be2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.124 [INFO][4950] k8s.go 608: Cleaning up netns ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.124 [INFO][4950] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" iface="eth0" netns="" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.124 [INFO][4950] k8s.go 615: Releasing IP address(es) ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.124 [INFO][4950] utils.go 188: Calico CNI releasing IP address ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.148 [INFO][4958] ipam_plugin.go 411: Releasing address using handleID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.148 [INFO][4958] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.148 [INFO][4958] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.153 [WARNING][4958] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.154 [INFO][4958] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" HandleID="k8s-pod-network.f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Workload="localhost-k8s-coredns--7db6d8ff4d--jh2jm-eth0" Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.155 [INFO][4958] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:03.160801 containerd[1458]: 2024-07-02 00:26:03.158 [INFO][4950] k8s.go 621: Teardown processing complete. ContainerID="f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8" Jul 2 00:26:03.161391 containerd[1458]: time="2024-07-02T00:26:03.160844414Z" level=info msg="TearDown network for sandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" successfully" Jul 2 00:26:03.225621 containerd[1458]: time="2024-07-02T00:26:03.225371021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:26:03.225621 containerd[1458]: time="2024-07-02T00:26:03.225491139Z" level=info msg="RemovePodSandbox \"f499d511139dbc238305391e9d33fc3fda5b5a587bfdbb5b00c77c6d373208e8\" returns successfully" Jul 2 00:26:03.601565 containerd[1458]: time="2024-07-02T00:26:03.601390121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:03.607894 containerd[1458]: time="2024-07-02T00:26:03.606763481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:26:03.608531 containerd[1458]: time="2024-07-02T00:26:03.608458737Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:03.617031 containerd[1458]: time="2024-07-02T00:26:03.616606083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:03.618096 containerd[1458]: time="2024-07-02T00:26:03.618036879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 6.646845671s" Jul 2 00:26:03.618096 containerd[1458]: time="2024-07-02T00:26:03.618088847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:26:03.621666 containerd[1458]: time="2024-07-02T00:26:03.621605748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:26:03.622974 containerd[1458]: time="2024-07-02T00:26:03.622568368Z" level=info msg="CreateContainer within 
sandbox \"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:26:03.671471 kubelet[2636]: E0702 00:26:03.671432 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:03.672138 containerd[1458]: time="2024-07-02T00:26:03.672012305Z" level=info msg="CreateContainer within sandbox \"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"66642e4add72732522547549f718d677d61edb0a08adc8bb41d9a88bbfaaec7b\"" Jul 2 00:26:03.672466 containerd[1458]: time="2024-07-02T00:26:03.672346096Z" level=info msg="StartContainer for \"66642e4add72732522547549f718d677d61edb0a08adc8bb41d9a88bbfaaec7b\"" Jul 2 00:26:03.718619 systemd[1]: Started cri-containerd-66642e4add72732522547549f718d677d61edb0a08adc8bb41d9a88bbfaaec7b.scope - libcontainer container 66642e4add72732522547549f718d677d61edb0a08adc8bb41d9a88bbfaaec7b. Jul 2 00:26:03.896965 containerd[1458]: time="2024-07-02T00:26:03.896899529Z" level=info msg="StartContainer for \"66642e4add72732522547549f718d677d61edb0a08adc8bb41d9a88bbfaaec7b\" returns successfully" Jul 2 00:26:04.674128 kubelet[2636]: E0702 00:26:04.674088 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:05.114789 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:38488.service - OpenSSH per-connection server daemon (10.0.0.1:38488). Jul 2 00:26:05.182191 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 38488 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:05.184528 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:05.190486 systemd-logind[1441]: New session 15 of user core. Jul 2 00:26:05.196236 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:26:05.683728 sshd[5011]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:05.695630 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:38488.service: Deactivated successfully. Jul 2 00:26:05.698373 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:26:05.701509 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:26:05.713307 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:38490.service - OpenSSH per-connection server daemon (10.0.0.1:38490). Jul 2 00:26:05.714328 systemd-logind[1441]: Removed session 15. Jul 2 00:26:05.743878 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 38490 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:05.745547 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:05.749902 systemd-logind[1441]: New session 16 of user core. Jul 2 00:26:05.753981 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:26:06.124000 sshd[5027]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:06.137367 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:38490.service: Deactivated successfully. Jul 2 00:26:06.145263 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:26:06.147772 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. 
Jul 2 00:26:06.153332 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:38500.service - OpenSSH per-connection server daemon (10.0.0.1:38500). Jul 2 00:26:06.155216 systemd-logind[1441]: Removed session 16. Jul 2 00:26:06.188796 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 38500 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:06.191468 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:06.198089 systemd-logind[1441]: New session 17 of user core. Jul 2 00:26:06.204077 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:26:06.508899 sshd[5044]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:06.513202 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:38500.service: Deactivated successfully. Jul 2 00:26:06.515684 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:26:06.516549 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:26:06.517564 systemd-logind[1441]: Removed session 17. Jul 2 00:26:07.201733 containerd[1458]: time="2024-07-02T00:26:07.201608447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:07.218422 containerd[1458]: time="2024-07-02T00:26:07.218282499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:26:07.235600 containerd[1458]: time="2024-07-02T00:26:07.235510477Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:07.258121 containerd[1458]: time="2024-07-02T00:26:07.258024946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:07.258997 containerd[1458]: time="2024-07-02T00:26:07.258926500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.637264786s" Jul 2 00:26:07.258997 containerd[1458]: time="2024-07-02T00:26:07.258988687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:26:07.260126 containerd[1458]: time="2024-07-02T00:26:07.260099807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:26:07.269940 containerd[1458]: time="2024-07-02T00:26:07.269865163Z" level=info msg="CreateContainer within sandbox \"46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:26:07.556260 containerd[1458]: time="2024-07-02T00:26:07.556071660Z" level=info msg="CreateContainer within sandbox \"46124d694a08d4cbe38fd0868a2d3ce3b632dbcff244dc90cce416d936668a35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7237c4558cacf45a0a0e068fb0f433e68b9575ddb232ec6e5e29be70c219b654\"" Jul 2 
00:26:07.556799 containerd[1458]: time="2024-07-02T00:26:07.556714394Z" level=info msg="StartContainer for \"7237c4558cacf45a0a0e068fb0f433e68b9575ddb232ec6e5e29be70c219b654\"" Jul 2 00:26:07.593096 systemd[1]: Started cri-containerd-7237c4558cacf45a0a0e068fb0f433e68b9575ddb232ec6e5e29be70c219b654.scope - libcontainer container 7237c4558cacf45a0a0e068fb0f433e68b9575ddb232ec6e5e29be70c219b654. Jul 2 00:26:07.680213 containerd[1458]: time="2024-07-02T00:26:07.680161020Z" level=info msg="StartContainer for \"7237c4558cacf45a0a0e068fb0f433e68b9575ddb232ec6e5e29be70c219b654\" returns successfully" Jul 2 00:26:07.718755 kubelet[2636]: I0702 00:26:07.718655 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-656cfcb6dd-26g68" podStartSLOduration=37.30133418 podStartE2EDuration="43.718633668s" podCreationTimestamp="2024-07-02 00:25:24 +0000 UTC" firstStartedPulling="2024-07-02 00:26:00.842579482 +0000 UTC m=+62.138550927" lastFinishedPulling="2024-07-02 00:26:07.25987897 +0000 UTC m=+68.555850415" observedRunningTime="2024-07-02 00:26:07.718118504 +0000 UTC m=+69.014089969" watchObservedRunningTime="2024-07-02 00:26:07.718633668 +0000 UTC m=+69.014605133" Jul 2 00:26:10.793220 kubelet[2636]: E0702 00:26:10.793165 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:11.521249 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:34044.service - OpenSSH per-connection server daemon (10.0.0.1:34044). Jul 2 00:26:11.637184 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 34044 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:11.639165 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:11.643727 systemd-logind[1441]: New session 18 of user core. Jul 2 00:26:11.654077 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:26:11.793478 containerd[1458]: time="2024-07-02T00:26:11.793276540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:11.812117 sshd[5176]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:11.815018 containerd[1458]: time="2024-07-02T00:26:11.814955418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:26:11.817397 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:34044.service: Deactivated successfully. Jul 2 00:26:11.819804 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:26:11.820663 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:26:11.821898 systemd-logind[1441]: Removed session 18. 
Jul 2 00:26:11.832424 containerd[1458]: time="2024-07-02T00:26:11.832333132Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:11.847941 containerd[1458]: time="2024-07-02T00:26:11.847788085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:11.848643 containerd[1458]: time="2024-07-02T00:26:11.848585221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 4.588454074s" Jul 2 00:26:11.848643 containerd[1458]: time="2024-07-02T00:26:11.848637078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:26:11.851341 containerd[1458]: time="2024-07-02T00:26:11.851285852Z" level=info msg="CreateContainer within sandbox \"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:26:12.042537 containerd[1458]: time="2024-07-02T00:26:12.042453714Z" level=info msg="CreateContainer within sandbox \"c366daaff0767dd19011bea0c0be8dfa85617193523fef50bb9e0add6ec17603\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7aec8dfb10e6df91ef1e3cf7701486062d15f8eae5142a7c2261eb0ca72c0462\"" Jul 2 00:26:12.043204 containerd[1458]: time="2024-07-02T00:26:12.043162211Z" level=info msg="StartContainer for \"7aec8dfb10e6df91ef1e3cf7701486062d15f8eae5142a7c2261eb0ca72c0462\"" Jul 2 00:26:12.088041 systemd[1]: Started cri-containerd-7aec8dfb10e6df91ef1e3cf7701486062d15f8eae5142a7c2261eb0ca72c0462.scope - libcontainer container 7aec8dfb10e6df91ef1e3cf7701486062d15f8eae5142a7c2261eb0ca72c0462. 
Jul 2 00:26:12.164062 containerd[1458]: time="2024-07-02T00:26:12.164004459Z" level=info msg="StartContainer for \"7aec8dfb10e6df91ef1e3cf7701486062d15f8eae5142a7c2261eb0ca72c0462\" returns successfully" Jul 2 00:26:12.721922 kubelet[2636]: I0702 00:26:12.721779 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7nzzg" podStartSLOduration=35.842468608 podStartE2EDuration="50.721751663s" podCreationTimestamp="2024-07-02 00:25:22 +0000 UTC" firstStartedPulling="2024-07-02 00:25:56.970304108 +0000 UTC m=+58.266275553" lastFinishedPulling="2024-07-02 00:26:11.849587163 +0000 UTC m=+73.145558608" observedRunningTime="2024-07-02 00:26:12.711439587 +0000 UTC m=+74.007411032" watchObservedRunningTime="2024-07-02 00:26:12.721751663 +0000 UTC m=+74.017723108" Jul 2 00:26:12.879836 kubelet[2636]: I0702 00:26:12.879776 2636 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:26:12.879836 kubelet[2636]: I0702 00:26:12.879828 2636 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:26:16.837495 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:34048.service - OpenSSH per-connection server daemon (10.0.0.1:34048). Jul 2 00:26:16.872607 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 34048 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:16.874730 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:16.880293 systemd-logind[1441]: New session 19 of user core. Jul 2 00:26:16.892102 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:26:17.002431 sshd[5257]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:17.007537 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:34048.service: Deactivated successfully. Jul 2 00:26:17.010735 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:26:17.011545 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:26:17.012508 systemd-logind[1441]: Removed session 19. Jul 2 00:26:22.019113 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:34672.service - OpenSSH per-connection server daemon (10.0.0.1:34672). Jul 2 00:26:22.073219 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 34672 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:22.075584 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:22.083069 systemd-logind[1441]: New session 20 of user core. Jul 2 00:26:22.092160 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:26:22.232593 sshd[5283]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:22.237765 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:34672.service: Deactivated successfully. Jul 2 00:26:22.240417 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:26:22.241182 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:26:22.242114 systemd-logind[1441]: Removed session 20. 
Jul 2 00:26:23.793721 kubelet[2636]: E0702 00:26:23.793651 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:27.249878 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). Jul 2 00:26:27.290696 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:27.292604 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:27.297286 systemd-logind[1441]: New session 21 of user core. Jul 2 00:26:27.308141 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:26:27.439763 sshd[5297]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:27.445376 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:34688.service: Deactivated successfully. Jul 2 00:26:27.448373 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:26:27.449172 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:26:27.450291 systemd-logind[1441]: Removed session 21. Jul 2 00:26:32.452944 systemd[1]: Started sshd@21-10.0.0.122:22-10.0.0.1:46630.service - OpenSSH per-connection server daemon (10.0.0.1:46630). Jul 2 00:26:32.497164 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 46630 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:32.499486 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:32.505312 systemd-logind[1441]: New session 22 of user core. Jul 2 00:26:32.514090 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:26:32.641344 sshd[5317]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:32.651227 systemd[1]: sshd@21-10.0.0.122:22-10.0.0.1:46630.service: Deactivated successfully. Jul 2 00:26:32.653385 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:26:32.655214 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:26:32.661251 systemd[1]: Started sshd@22-10.0.0.122:22-10.0.0.1:46634.service - OpenSSH per-connection server daemon (10.0.0.1:46634). Jul 2 00:26:32.662688 systemd-logind[1441]: Removed session 22. Jul 2 00:26:32.698085 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 46634 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:32.699917 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:32.706971 systemd-logind[1441]: New session 23 of user core. Jul 2 00:26:32.711013 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:26:32.795557 kubelet[2636]: E0702 00:26:32.795497 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:33.021114 sshd[5331]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:33.031448 systemd[1]: sshd@22-10.0.0.122:22-10.0.0.1:46634.service: Deactivated successfully. Jul 2 00:26:33.034044 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:26:33.036436 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:26:33.056379 systemd[1]: Started sshd@23-10.0.0.122:22-10.0.0.1:46642.service - OpenSSH per-connection server daemon (10.0.0.1:46642). 
Jul 2 00:26:33.057578 systemd-logind[1441]: Removed session 23. Jul 2 00:26:33.089196 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 46642 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:33.091175 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:33.096069 systemd-logind[1441]: New session 24 of user core. Jul 2 00:26:33.105042 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:26:34.739618 sshd[5343]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:34.753079 systemd[1]: sshd@23-10.0.0.122:22-10.0.0.1:46642.service: Deactivated successfully. Jul 2 00:26:34.757621 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:26:34.761212 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:26:34.770236 systemd[1]: Started sshd@24-10.0.0.122:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Jul 2 00:26:34.775388 systemd-logind[1441]: Removed session 24. Jul 2 00:26:34.812740 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:34.814587 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:34.819567 systemd-logind[1441]: New session 25 of user core. Jul 2 00:26:34.828023 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:26:35.066236 sshd[5368]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:35.078584 systemd[1]: sshd@24-10.0.0.122:22-10.0.0.1:46648.service: Deactivated successfully. Jul 2 00:26:35.081083 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:26:35.082982 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:26:35.092186 systemd[1]: Started sshd@25-10.0.0.122:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654). Jul 2 00:26:35.093275 systemd-logind[1441]: Removed session 25. Jul 2 00:26:35.123448 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:26:35.125125 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:35.129347 systemd-logind[1441]: New session 26 of user core. Jul 2 00:26:35.139009 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:26:35.251190 sshd[5381]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:35.255176 systemd[1]: sshd@25-10.0.0.122:22-10.0.0.1:46654.service: Deactivated successfully. Jul 2 00:26:35.257506 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:26:35.258266 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:26:35.259583 systemd-logind[1441]: Removed session 26. Jul 2 00:26:39.793027 kubelet[2636]: E0702 00:26:39.792976 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:26:40.266094 systemd[1]: Started sshd@26-10.0.0.122:22-10.0.0.1:45978.service - OpenSSH per-connection server daemon (10.0.0.1:45978). 
Jul 2 00:26:40.309960 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 45978 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:26:40.312394 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:40.317896 systemd-logind[1441]: New session 27 of user core.
Jul 2 00:26:40.333136 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:26:40.486613 sshd[5428]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:40.491150 systemd[1]: sshd@26-10.0.0.122:22-10.0.0.1:45978.service: Deactivated successfully.
Jul 2 00:26:40.493631 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:26:40.494313 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:26:40.495563 systemd-logind[1441]: Removed session 27.
Jul 2 00:26:42.522639 kubelet[2636]: E0702 00:26:42.522585 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:26:45.505561 systemd[1]: Started sshd@27-10.0.0.122:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988).
Jul 2 00:26:45.551841 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:26:45.553929 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:45.571056 systemd-logind[1441]: New session 28 of user core.
Jul 2 00:26:45.585302 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:26:45.716235 sshd[5470]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:45.720391 systemd[1]: sshd@27-10.0.0.122:22-10.0.0.1:45988.service: Deactivated successfully.
Jul 2 00:26:45.723070 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:26:45.726469 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:26:45.728946 systemd-logind[1441]: Removed session 28.
Jul 2 00:26:47.874317 kubelet[2636]: I0702 00:26:47.874234 2636 topology_manager.go:215] "Topology Admit Handler" podUID="7c81090d-fc95-4201-aae2-ca1193f29146" podNamespace="calico-apiserver" podName="calico-apiserver-85b7f798cc-bcngr"
Jul 2 00:26:47.884084 systemd[1]: Created slice kubepods-besteffort-pod7c81090d_fc95_4201_aae2_ca1193f29146.slice - libcontainer container kubepods-besteffort-pod7c81090d_fc95_4201_aae2_ca1193f29146.slice.
Jul 2 00:26:48.019146 kubelet[2636]: I0702 00:26:48.019058 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c81090d-fc95-4201-aae2-ca1193f29146-calico-apiserver-certs\") pod \"calico-apiserver-85b7f798cc-bcngr\" (UID: \"7c81090d-fc95-4201-aae2-ca1193f29146\") " pod="calico-apiserver/calico-apiserver-85b7f798cc-bcngr"
Jul 2 00:26:48.019146 kubelet[2636]: I0702 00:26:48.019148 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7d2q\" (UniqueName: \"kubernetes.io/projected/7c81090d-fc95-4201-aae2-ca1193f29146-kube-api-access-d7d2q\") pod \"calico-apiserver-85b7f798cc-bcngr\" (UID: \"7c81090d-fc95-4201-aae2-ca1193f29146\") " pod="calico-apiserver/calico-apiserver-85b7f798cc-bcngr"
Jul 2 00:26:48.188357 containerd[1458]: time="2024-07-02T00:26:48.188200617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b7f798cc-bcngr,Uid:7c81090d-fc95-4201-aae2-ca1193f29146,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:26:48.864191 systemd-networkd[1386]: cali24fa38eef14: Link UP
Jul 2 00:26:48.865334 systemd-networkd[1386]: cali24fa38eef14: Gained carrier
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.702 [INFO][5491] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0 calico-apiserver-85b7f798cc- calico-apiserver 7c81090d-fc95-4201-aae2-ca1193f29146 1256 0 2024-07-02 00:26:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85b7f798cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85b7f798cc-bcngr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali24fa38eef14 [] []}} ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.703 [INFO][5491] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.756 [INFO][5505] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" HandleID="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Workload="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.771 [INFO][5505] ipam_plugin.go 264: Auto assigning IP ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" HandleID="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Workload="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85b7f798cc-bcngr", "timestamp":"2024-07-02 00:26:48.756178905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.771 [INFO][5505] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.771 [INFO][5505] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.771 [INFO][5505] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.774 [INFO][5505] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.779 [INFO][5505] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.783 [INFO][5505] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.785 [INFO][5505] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.806 [INFO][5505] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.806 [INFO][5505] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.808 [INFO][5505] ipam.go 1685: Creating new handle: k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.811 [INFO][5505] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.858 [INFO][5505] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.858 [INFO][5505] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" host="localhost"
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.858 [INFO][5505] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:26:48.961074 containerd[1458]: 2024-07-02 00:26:48.858 [INFO][5505] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" HandleID="k8s-pod-network.e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Workload="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.862 [INFO][5491] k8s.go 386: Populated endpoint ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0", GenerateName:"calico-apiserver-85b7f798cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c81090d-fc95-4201-aae2-ca1193f29146", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b7f798cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85b7f798cc-bcngr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24fa38eef14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.862 [INFO][5491] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.862 [INFO][5491] dataplane_linux.go 68: Setting the host side veth name to cali24fa38eef14 ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.865 [INFO][5491] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.865 [INFO][5491] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0", GenerateName:"calico-apiserver-85b7f798cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c81090d-fc95-4201-aae2-ca1193f29146", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b7f798cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e", Pod:"calico-apiserver-85b7f798cc-bcngr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24fa38eef14", MAC:"3e:d9:9b:28:4b:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:26:48.962948 containerd[1458]: 2024-07-02 00:26:48.956 [INFO][5491] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e" Namespace="calico-apiserver" Pod="calico-apiserver-85b7f798cc-bcngr" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b7f798cc--bcngr-eth0"
Jul 2 00:26:49.049924 containerd[1458]: time="2024-07-02T00:26:49.049476150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:26:49.049924 containerd[1458]: time="2024-07-02T00:26:49.049606696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:26:49.049924 containerd[1458]: time="2024-07-02T00:26:49.049627164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:26:49.049924 containerd[1458]: time="2024-07-02T00:26:49.049642805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:26:49.074477 systemd[1]: Started cri-containerd-e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e.scope - libcontainer container e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e.
Jul 2 00:26:49.089936 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:26:49.118660 containerd[1458]: time="2024-07-02T00:26:49.118483826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b7f798cc-bcngr,Uid:7c81090d-fc95-4201-aae2-ca1193f29146,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e\""
Jul 2 00:26:49.120532 containerd[1458]: time="2024-07-02T00:26:49.120477872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:26:50.653045 systemd-networkd[1386]: cali24fa38eef14: Gained IPv6LL
Jul 2 00:26:50.733945 systemd[1]: Started sshd@28-10.0.0.122:22-10.0.0.1:59934.service - OpenSSH per-connection server daemon (10.0.0.1:59934).
Jul 2 00:26:50.790211 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 59934 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:26:50.792618 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:50.799311 systemd-logind[1441]: New session 29 of user core.
Jul 2 00:26:50.807028 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:26:51.120814 sshd[5576]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:51.126089 systemd[1]: sshd@28-10.0.0.122:22-10.0.0.1:59934.service: Deactivated successfully.
Jul 2 00:26:51.128494 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:26:51.129470 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:26:51.130513 systemd-logind[1441]: Removed session 29.
Jul 2 00:26:52.153519 containerd[1458]: time="2024-07-02T00:26:52.153333091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:26:52.154908 containerd[1458]: time="2024-07-02T00:26:52.154344496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:26:52.160665 containerd[1458]: time="2024-07-02T00:26:52.160618474Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:26:52.164023 containerd[1458]: time="2024-07-02T00:26:52.163986066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:26:52.164841 containerd[1458]: time="2024-07-02T00:26:52.164794290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.044278205s"
Jul 2 00:26:52.164841 containerd[1458]: time="2024-07-02T00:26:52.164835707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:26:52.168234 containerd[1458]: time="2024-07-02T00:26:52.168060802Z" level=info msg="CreateContainer within sandbox \"e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:26:52.200075 containerd[1458]: time="2024-07-02T00:26:52.200016393Z" level=info msg="CreateContainer within sandbox \"e7c31ac1fa2d92c9bc2767fc41c3159262218e3cfa5495852ccb8e01e06acf2e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ede2931da89314c9c605e4d01e01446fd1b285cc9b3d05c1f9cb0f772e9637f2\""
Jul 2 00:26:52.200868 containerd[1458]: time="2024-07-02T00:26:52.200818624Z" level=info msg="StartContainer for \"ede2931da89314c9c605e4d01e01446fd1b285cc9b3d05c1f9cb0f772e9637f2\""
Jul 2 00:26:52.287254 systemd[1]: Started cri-containerd-ede2931da89314c9c605e4d01e01446fd1b285cc9b3d05c1f9cb0f772e9637f2.scope - libcontainer container ede2931da89314c9c605e4d01e01446fd1b285cc9b3d05c1f9cb0f772e9637f2.
Jul 2 00:26:52.479610 containerd[1458]: time="2024-07-02T00:26:52.479352707Z" level=info msg="StartContainer for \"ede2931da89314c9c605e4d01e01446fd1b285cc9b3d05c1f9cb0f772e9637f2\" returns successfully"
Jul 2 00:26:52.813685 kubelet[2636]: I0702 00:26:52.813462 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85b7f798cc-bcngr" podStartSLOduration=2.7669061150000003 podStartE2EDuration="5.813398631s" podCreationTimestamp="2024-07-02 00:26:47 +0000 UTC" firstStartedPulling="2024-07-02 00:26:49.119994231 +0000 UTC m=+110.415965676" lastFinishedPulling="2024-07-02 00:26:52.166486747 +0000 UTC m=+113.462458192" observedRunningTime="2024-07-02 00:26:52.813099057 +0000 UTC m=+114.109070502" watchObservedRunningTime="2024-07-02 00:26:52.813398631 +0000 UTC m=+114.109370076"
Jul 2 00:26:56.132028 systemd[1]: Started sshd@29-10.0.0.122:22-10.0.0.1:59942.service - OpenSSH per-connection server daemon (10.0.0.1:59942).
Jul 2 00:26:56.207673 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 59942 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:26:56.209828 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:56.215317 systemd-logind[1441]: New session 30 of user core.
Jul 2 00:26:56.226049 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:26:56.361530 sshd[5665]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:56.367482 systemd[1]: sshd@29-10.0.0.122:22-10.0.0.1:59942.service: Deactivated successfully.
Jul 2 00:26:56.374575 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:26:56.375970 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:26:56.377275 systemd-logind[1441]: Removed session 30.
Jul 2 00:27:01.387525 systemd[1]: Started sshd@30-10.0.0.122:22-10.0.0.1:38644.service - OpenSSH per-connection server daemon (10.0.0.1:38644).
Jul 2 00:27:01.424336 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 38644 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:27:01.426842 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:27:01.432020 systemd-logind[1441]: New session 31 of user core.
Jul 2 00:27:01.442161 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 2 00:27:01.556417 sshd[5689]: pam_unix(sshd:session): session closed for user core
Jul 2 00:27:01.561235 systemd[1]: sshd@30-10.0.0.122:22-10.0.0.1:38644.service: Deactivated successfully.
Jul 2 00:27:01.563376 systemd[1]: session-31.scope: Deactivated successfully.
Jul 2 00:27:01.564244 systemd-logind[1441]: Session 31 logged out. Waiting for processes to exit.
Jul 2 00:27:01.565244 systemd-logind[1441]: Removed session 31.
Jul 2 00:27:03.230171 containerd[1458]: time="2024-07-02T00:27:03.230120191Z" level=info msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\""
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.691 [WARNING][5718] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9", Pod:"coredns-7db6d8ff4d-8fw82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ed898119a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.691 [INFO][5718] k8s.go 608: Cleaning up netns ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.691 [INFO][5718] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" iface="eth0" netns=""
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.691 [INFO][5718] k8s.go 615: Releasing IP address(es) ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.691 [INFO][5718] utils.go 188: Calico CNI releasing IP address ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.714 [INFO][5726] ipam_plugin.go 411: Releasing address using handleID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.714 [INFO][5726] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.714 [INFO][5726] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.720 [WARNING][5726] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.720 [INFO][5726] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.722 [INFO][5726] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:27:03.727440 containerd[1458]: 2024-07-02 00:27:03.724 [INFO][5718] k8s.go 621: Teardown processing complete. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.728035 containerd[1458]: time="2024-07-02T00:27:03.727486783Z" level=info msg="TearDown network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" successfully"
Jul 2 00:27:03.728035 containerd[1458]: time="2024-07-02T00:27:03.727517892Z" level=info msg="StopPodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" returns successfully"
Jul 2 00:27:03.728161 containerd[1458]: time="2024-07-02T00:27:03.728128061Z" level=info msg="RemovePodSandbox for \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\""
Jul 2 00:27:03.728215 containerd[1458]: time="2024-07-02T00:27:03.728174278Z" level=info msg="Forcibly stopping sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\""
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.763 [WARNING][5750] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dda3dde4-f9bb-48b1-b9f5-9ee16ac96658", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b90890691348bf256f0436fad81c3fa45c378d2f2b201aa2884ae1e86ea7e9d9", Pod:"coredns-7db6d8ff4d-8fw82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ed898119a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.763 [INFO][5750] k8s.go 608: Cleaning up netns ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.763 [INFO][5750] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" iface="eth0" netns=""
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.763 [INFO][5750] k8s.go 615: Releasing IP address(es) ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.763 [INFO][5750] utils.go 188: Calico CNI releasing IP address ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.813 [INFO][5758] ipam_plugin.go 411: Releasing address using handleID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.814 [INFO][5758] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.814 [INFO][5758] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.818 [WARNING][5758] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.819 [INFO][5758] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" HandleID="k8s-pod-network.ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764" Workload="localhost-k8s-coredns--7db6d8ff4d--8fw82-eth0"
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.820 [INFO][5758] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:27:03.825307 containerd[1458]: 2024-07-02 00:27:03.822 [INFO][5750] k8s.go 621: Teardown processing complete. ContainerID="ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764"
Jul 2 00:27:03.825910 containerd[1458]: time="2024-07-02T00:27:03.825400415Z" level=info msg="TearDown network for sandbox \"ba08dddf75eed8bd7041f251885c79f8596db979eddb14fb9a6e3e733c97a764\" successfully"