Aug 5 22:26:37.467353 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:26:37.467385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:26:37.467399 kernel: BIOS-provided physical RAM map: Aug 5 22:26:37.467408 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 5 22:26:37.467416 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Aug 5 22:26:37.467423 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Aug 5 22:26:37.467433 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Aug 5 22:26:37.467443 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Aug 5 22:26:37.467451 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Aug 5 22:26:37.467459 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Aug 5 22:26:37.467473 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Aug 5 22:26:37.467482 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Aug 5 22:26:37.467490 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Aug 5 22:26:37.467499 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Aug 5 22:26:37.467511 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Aug 5 22:26:37.467531 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Aug 5 22:26:37.467571 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Aug 5 22:26:37.467582 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Aug 5 22:26:37.467592 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Aug 5 22:26:37.467602 kernel: NX (Execute Disable) protection: active Aug 5 22:26:37.467618 kernel: APIC: Static calls initialized Aug 5 22:26:37.467629 kernel: efi: EFI v2.7 by EDK II Aug 5 22:26:37.467640 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4eb018 Aug 5 22:26:37.467650 kernel: SMBIOS 2.8 present. Aug 5 22:26:37.467660 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Aug 5 22:26:37.467670 kernel: Hypervisor detected: KVM Aug 5 22:26:37.467680 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 5 22:26:37.467695 kernel: kvm-clock: using sched offset of 9323731870 cycles Aug 5 22:26:37.467707 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 5 22:26:37.467718 kernel: tsc: Detected 2794.748 MHz processor Aug 5 22:26:37.467729 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:26:37.467740 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:26:37.467751 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Aug 5 22:26:37.467761 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 5 22:26:37.467771 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:26:37.467785 kernel: Using GB pages for direct mapping Aug 5 22:26:37.467794 kernel: Secure boot disabled Aug 5 22:26:37.467805 kernel: ACPI: Early table checksum verification disabled Aug 5 22:26:37.467816 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Aug 5 22:26:37.467840 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Aug 5 22:26:37.467860 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:26:37.467877 kernel: ACPI: DSDT 
0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:26:37.467898 kernel: ACPI: FACS 0x000000009CBDD000 000040 Aug 5 22:26:37.467912 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:26:37.467923 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:26:37.467933 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:26:37.467949 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 5 22:26:37.467962 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Aug 5 22:26:37.467973 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Aug 5 22:26:37.467983 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Aug 5 22:26:37.467998 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Aug 5 22:26:37.468008 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Aug 5 22:26:37.468017 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Aug 5 22:26:37.468028 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Aug 5 22:26:37.468037 kernel: No NUMA configuration found Aug 5 22:26:37.468048 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Aug 5 22:26:37.468058 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Aug 5 22:26:37.468068 kernel: Zone ranges: Aug 5 22:26:37.468079 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:26:37.468093 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Aug 5 22:26:37.468103 kernel: Normal empty Aug 5 22:26:37.468114 kernel: Movable zone start for each node Aug 5 22:26:37.468124 kernel: Early memory node ranges Aug 5 22:26:37.468134 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 5 22:26:37.468144 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Aug 5 22:26:37.468154 
kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Aug 5 22:26:37.468169 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Aug 5 22:26:37.468179 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Aug 5 22:26:37.468193 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Aug 5 22:26:37.468203 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Aug 5 22:26:37.468213 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:26:37.468222 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 5 22:26:37.468232 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Aug 5 22:26:37.468242 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:26:37.468258 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Aug 5 22:26:37.468281 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Aug 5 22:26:37.468292 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Aug 5 22:26:37.468307 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 5 22:26:37.468317 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 5 22:26:37.468327 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:26:37.468337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 5 22:26:37.468346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 5 22:26:37.468356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:26:37.468366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 5 22:26:37.468375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 5 22:26:37.468385 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:26:37.468400 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 5 22:26:37.468411 kernel: TSC deadline timer available Aug 5 22:26:37.468422 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 5 22:26:37.468432 kernel: kvm-guest: 
APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 5 22:26:37.468443 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 5 22:26:37.468454 kernel: kvm-guest: setup PV sched yield Aug 5 22:26:37.468464 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Aug 5 22:26:37.468475 kernel: Booting paravirtualized kernel on KVM Aug 5 22:26:37.468486 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:26:37.468500 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 5 22:26:37.468511 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Aug 5 22:26:37.468521 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Aug 5 22:26:37.468531 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 5 22:26:37.468541 kernel: kvm-guest: PV spinlocks enabled Aug 5 22:26:37.468551 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:26:37.468563 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:26:37.468575 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:26:37.468589 kernel: random: crng init done Aug 5 22:26:37.468600 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:26:37.468610 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 22:26:37.468621 kernel: Fallback order for Node 0: 0 Aug 5 22:26:37.468631 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Aug 5 22:26:37.468641 kernel: Policy zone: DMA32 Aug 5 22:26:37.468651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:26:37.468662 kernel: Memory: 2387912K/2567000K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 178828K reserved, 0K cma-reserved) Aug 5 22:26:37.468672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 5 22:26:37.468686 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:26:37.468701 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:26:37.468712 kernel: Dynamic Preempt: voluntary Aug 5 22:26:37.468722 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:26:37.468733 kernel: rcu: RCU event tracing is enabled. Aug 5 22:26:37.468745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 5 22:26:37.468769 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:26:37.468784 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:26:37.468795 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:26:37.468807 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 22:26:37.468818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 5 22:26:37.468846 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 5 22:26:37.468863 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 5 22:26:37.468875 kernel: Console: colour dummy device 80x25 Aug 5 22:26:37.468887 kernel: printk: console [ttyS0] enabled Aug 5 22:26:37.468899 kernel: ACPI: Core revision 20230628 Aug 5 22:26:37.468910 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 5 22:26:37.468926 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:26:37.468942 kernel: x2apic enabled Aug 5 22:26:37.468954 kernel: APIC: Switched APIC routing to: physical x2apic Aug 5 22:26:37.468965 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 5 22:26:37.468976 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 5 22:26:37.468987 kernel: kvm-guest: setup PV IPIs Aug 5 22:26:37.468998 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 5 22:26:37.469010 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 5 22:26:37.469022 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Aug 5 22:26:37.469038 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 5 22:26:37.469049 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 5 22:26:37.469061 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 5 22:26:37.469072 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:26:37.469083 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:26:37.469094 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:26:37.469104 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:26:37.469115 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 5 22:26:37.469126 kernel: RETBleed: Mitigation: untrained return thunk Aug 5 22:26:37.469141 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 5 22:26:37.469152 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 5 22:26:37.469164 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 5 22:26:37.469176 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 5 22:26:37.469187 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 5 22:26:37.469198 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:26:37.469210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:26:37.469221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:26:37.469236 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:26:37.469247 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Aug 5 22:26:37.469258 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:26:37.469282 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:26:37.469293 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:26:37.469308 kernel: SELinux: Initializing. Aug 5 22:26:37.469319 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 22:26:37.469331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 22:26:37.469342 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 5 22:26:37.469358 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:26:37.469369 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:26:37.469380 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:26:37.469390 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 5 22:26:37.469401 kernel: ... version: 0 Aug 5 22:26:37.469412 kernel: ... bit width: 48 Aug 5 22:26:37.469422 kernel: ... generic registers: 6 Aug 5 22:26:37.469433 kernel: ... value mask: 0000ffffffffffff Aug 5 22:26:37.469444 kernel: ... max period: 00007fffffffffff Aug 5 22:26:37.469459 kernel: ... fixed-purpose events: 0 Aug 5 22:26:37.469470 kernel: ... event mask: 000000000000003f Aug 5 22:26:37.469481 kernel: signal: max sigframe size: 1776 Aug 5 22:26:37.469492 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:26:37.469503 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:26:37.469514 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:26:37.469526 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:26:37.469537 kernel: .... 
node #0, CPUs: #1 #2 #3 Aug 5 22:26:37.469548 kernel: smp: Brought up 1 node, 4 CPUs Aug 5 22:26:37.469564 kernel: smpboot: Max logical packages: 1 Aug 5 22:26:37.469575 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Aug 5 22:26:37.469586 kernel: devtmpfs: initialized Aug 5 22:26:37.469597 kernel: x86/mm: Memory block size: 128MB Aug 5 22:26:37.469608 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Aug 5 22:26:37.469619 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Aug 5 22:26:37.469636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Aug 5 22:26:37.469647 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Aug 5 22:26:37.469658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Aug 5 22:26:37.469674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:26:37.469685 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 5 22:26:37.469697 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:26:37.469709 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:26:37.469720 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:26:37.469731 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:26:37.469742 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:26:37.469753 kernel: audit: type=2000 audit(1722896793.888:1): state=initialized audit_enabled=0 res=1 Aug 5 22:26:37.469764 kernel: cpuidle: using governor menu Aug 5 22:26:37.469781 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:26:37.469792 kernel: dca service started, version 1.12.1 Aug 5 22:26:37.469804 kernel: PCI: Using configuration type 1 for base access Aug 5 22:26:37.469816 kernel: PCI: Using configuration type 1 for 
extended access Aug 5 22:26:37.469843 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:26:37.469855 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:26:37.469871 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:26:37.469883 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:26:37.469895 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:26:37.469911 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:26:37.469923 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:26:37.469935 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:26:37.469946 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:26:37.469958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:26:37.469969 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:26:37.469981 kernel: ACPI: Interpreter enabled Aug 5 22:26:37.469992 kernel: ACPI: PM: (supports S0 S3 S5) Aug 5 22:26:37.470004 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:26:37.470020 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:26:37.470031 kernel: PCI: Using E820 reservations for host bridge windows Aug 5 22:26:37.470043 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 5 22:26:37.470055 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:26:37.470434 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:26:37.470457 kernel: acpiphp: Slot [3] registered Aug 5 22:26:37.470469 kernel: acpiphp: Slot [4] registered Aug 5 22:26:37.470481 kernel: acpiphp: Slot [5] registered Aug 5 22:26:37.470498 kernel: acpiphp: Slot [6] registered Aug 5 22:26:37.470510 kernel: acpiphp: Slot [7] registered Aug 5 22:26:37.470522 kernel: acpiphp: Slot [8] registered Aug 5 22:26:37.470533 kernel: acpiphp: Slot [9] 
registered Aug 5 22:26:37.470545 kernel: acpiphp: Slot [10] registered Aug 5 22:26:37.470557 kernel: acpiphp: Slot [11] registered Aug 5 22:26:37.470569 kernel: acpiphp: Slot [12] registered Aug 5 22:26:37.470582 kernel: acpiphp: Slot [13] registered Aug 5 22:26:37.470594 kernel: acpiphp: Slot [14] registered Aug 5 22:26:37.470606 kernel: acpiphp: Slot [15] registered Aug 5 22:26:37.470622 kernel: acpiphp: Slot [16] registered Aug 5 22:26:37.470634 kernel: acpiphp: Slot [17] registered Aug 5 22:26:37.470645 kernel: acpiphp: Slot [18] registered Aug 5 22:26:37.470657 kernel: acpiphp: Slot [19] registered Aug 5 22:26:37.470669 kernel: acpiphp: Slot [20] registered Aug 5 22:26:37.470680 kernel: acpiphp: Slot [21] registered Aug 5 22:26:37.470692 kernel: acpiphp: Slot [22] registered Aug 5 22:26:37.470704 kernel: acpiphp: Slot [23] registered Aug 5 22:26:37.470716 kernel: acpiphp: Slot [24] registered Aug 5 22:26:37.470732 kernel: acpiphp: Slot [25] registered Aug 5 22:26:37.470744 kernel: acpiphp: Slot [26] registered Aug 5 22:26:37.470755 kernel: acpiphp: Slot [27] registered Aug 5 22:26:37.470767 kernel: acpiphp: Slot [28] registered Aug 5 22:26:37.470779 kernel: acpiphp: Slot [29] registered Aug 5 22:26:37.470791 kernel: acpiphp: Slot [30] registered Aug 5 22:26:37.470802 kernel: acpiphp: Slot [31] registered Aug 5 22:26:37.470814 kernel: PCI host bridge to bus 0000:00 Aug 5 22:26:37.471041 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 5 22:26:37.471203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 5 22:26:37.471394 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 5 22:26:37.471572 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Aug 5 22:26:37.471728 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Aug 5 22:26:37.471938 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:26:37.472162 kernel: pci 
0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 5 22:26:37.472360 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 5 22:26:37.472560 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 5 22:26:37.472723 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Aug 5 22:26:37.472927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 5 22:26:37.473115 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 5 22:26:37.473417 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 5 22:26:37.473572 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 5 22:26:37.473743 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 5 22:26:37.473936 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 5 22:26:37.474094 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Aug 5 22:26:37.474258 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Aug 5 22:26:37.474416 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Aug 5 22:26:37.474633 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Aug 5 22:26:37.474800 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Aug 5 22:26:37.475092 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Aug 5 22:26:37.475248 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 5 22:26:37.475452 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Aug 5 22:26:37.475610 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Aug 5 22:26:37.475846 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Aug 5 22:26:37.476018 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Aug 5 22:26:37.476228 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 5 22:26:37.476401 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Aug 5 22:26:37.476549 
kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Aug 5 22:26:37.476697 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Aug 5 22:26:37.476881 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Aug 5 22:26:37.477034 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Aug 5 22:26:37.477191 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Aug 5 22:26:37.477361 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Aug 5 22:26:37.477513 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Aug 5 22:26:37.477527 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 5 22:26:37.477538 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 5 22:26:37.477550 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 5 22:26:37.477629 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 5 22:26:37.477640 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 5 22:26:37.477651 kernel: iommu: Default domain type: Translated Aug 5 22:26:37.477661 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:26:37.477677 kernel: efivars: Registered efivars operations Aug 5 22:26:37.477687 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:26:37.477698 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 5 22:26:37.477709 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Aug 5 22:26:37.477720 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Aug 5 22:26:37.477730 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Aug 5 22:26:37.477741 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Aug 5 22:26:37.477909 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 5 22:26:37.478085 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 5 22:26:37.478241 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 5 22:26:37.478254 
kernel: vgaarb: loaded Aug 5 22:26:37.478265 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 5 22:26:37.478288 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 5 22:26:37.478298 kernel: clocksource: Switched to clocksource kvm-clock Aug 5 22:26:37.478310 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:26:37.478320 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:26:37.478331 kernel: pnp: PnP ACPI init Aug 5 22:26:37.478513 kernel: pnp 00:02: [dma 2] Aug 5 22:26:37.478536 kernel: pnp: PnP ACPI: found 6 devices Aug 5 22:26:37.478548 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:26:37.478559 kernel: NET: Registered PF_INET protocol family Aug 5 22:26:37.478570 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:26:37.478585 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 5 22:26:37.478596 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:26:37.478610 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 22:26:37.478623 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 5 22:26:37.478640 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 5 22:26:37.478652 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 22:26:37.478663 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 22:26:37.478675 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:26:37.478686 kernel: NET: Registered PF_XDP protocol family Aug 5 22:26:37.478870 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Aug 5 22:26:37.479044 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Aug 5 22:26:37.479212 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] 
Aug 5 22:26:37.479387 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 5 22:26:37.479542 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 5 22:26:37.479695 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Aug 5 22:26:37.479912 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Aug 5 22:26:37.480097 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 5 22:26:37.480290 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 5 22:26:37.480312 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:26:37.480329 kernel: Initialise system trusted keyrings Aug 5 22:26:37.480350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 5 22:26:37.480361 kernel: Key type asymmetric registered Aug 5 22:26:37.480371 kernel: Asymmetric key parser 'x509' registered Aug 5 22:26:37.480380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:26:37.480390 kernel: io scheduler mq-deadline registered Aug 5 22:26:37.480400 kernel: io scheduler kyber registered Aug 5 22:26:37.480410 kernel: io scheduler bfq registered Aug 5 22:26:37.480419 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:26:37.480430 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 5 22:26:37.480443 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 5 22:26:37.480454 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 5 22:26:37.480463 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:26:37.480473 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:26:37.480484 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 5 22:26:37.480525 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 5 22:26:37.480545 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 5 22:26:37.480727 kernel: rtc_cmos 00:05: RTC can wake from S4 Aug 5 22:26:37.480743 kernel: input: AT Translated Set 2 keyboard 
as /devices/platform/i8042/serio0/input/input0 Aug 5 22:26:37.480923 kernel: rtc_cmos 00:05: registered as rtc0 Aug 5 22:26:37.481096 kernel: rtc_cmos 00:05: setting system clock to 2024-08-05T22:26:36 UTC (1722896796) Aug 5 22:26:37.481254 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 5 22:26:37.481283 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 5 22:26:37.481296 kernel: efifb: probing for efifb Aug 5 22:26:37.481308 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Aug 5 22:26:37.481320 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Aug 5 22:26:37.481332 kernel: efifb: scrolling: redraw Aug 5 22:26:37.481350 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Aug 5 22:26:37.481362 kernel: Console: switching to colour frame buffer device 100x37 Aug 5 22:26:37.481374 kernel: fb0: EFI VGA frame buffer device Aug 5 22:26:37.481389 kernel: pstore: Using crash dump compression: deflate Aug 5 22:26:37.481401 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:26:37.481413 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:26:37.481424 kernel: Segment Routing with IPv6 Aug 5 22:26:37.481435 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:26:37.481447 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:26:37.481470 kernel: Key type dns_resolver registered Aug 5 22:26:37.481480 kernel: IPI shorthand broadcast: enabled Aug 5 22:26:37.481491 kernel: sched_clock: Marking stable (2962005979, 233938506)->(3669399614, -473455129) Aug 5 22:26:37.481505 kernel: registered taskstats version 1 Aug 5 22:26:37.481515 kernel: Loading compiled-in X.509 certificates Aug 5 22:26:37.481526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:26:37.481539 kernel: Key type .fscrypt registered Aug 5 22:26:37.481549 kernel: Key type fscrypt-provisioning registered Aug 5 
22:26:37.481560 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:26:37.481570 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:26:37.481581 kernel: ima: No architecture policies found Aug 5 22:26:37.481591 kernel: clk: Disabling unused clocks Aug 5 22:26:37.481602 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:26:37.481614 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:26:37.481628 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:26:37.481638 kernel: Run /init as init process Aug 5 22:26:37.481648 kernel: with arguments: Aug 5 22:26:37.481658 kernel: /init Aug 5 22:26:37.481668 kernel: with environment: Aug 5 22:26:37.481678 kernel: HOME=/ Aug 5 22:26:37.481688 kernel: TERM=linux Aug 5 22:26:37.481698 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:26:37.481712 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:26:37.481728 systemd[1]: Detected virtualization kvm. Aug 5 22:26:37.481739 systemd[1]: Detected architecture x86-64. Aug 5 22:26:37.481750 systemd[1]: Running in initrd. Aug 5 22:26:37.481761 systemd[1]: No hostname configured, using default hostname. Aug 5 22:26:37.481772 systemd[1]: Hostname set to . Aug 5 22:26:37.481783 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:26:37.481794 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:26:37.481808 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:26:37.481820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 5 22:26:37.481848 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:26:37.481865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:26:37.481883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:26:37.481905 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:26:37.481927 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:26:37.481944 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:26:37.481955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:26:37.481966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:26:37.481978 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:26:37.481989 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:26:37.481999 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:26:37.482010 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:26:37.482021 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:26:37.482036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:26:37.482047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:26:37.482058 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:26:37.482069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:26:37.482080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:26:37.482091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:26:37.482102 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:26:37.482114 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:26:37.482128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:26:37.482141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:26:37.482152 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:26:37.482164 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:26:37.482174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:26:37.482185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:37.482196 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:26:37.482235 systemd-journald[195]: Collecting audit messages is disabled.
Aug 5 22:26:37.482265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:26:37.482287 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:26:37.482299 systemd-journald[195]: Journal started
Aug 5 22:26:37.482326 systemd-journald[195]: Runtime Journal (/run/log/journal/691be27d1e114b09913b810ab42eac5b) is 6.0M, max 48.3M, 42.3M free.
Aug 5 22:26:37.490193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:26:37.508952 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:26:37.537782 systemd-modules-load[196]: Inserted module 'overlay'
Aug 5 22:26:37.547560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:26:37.549440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:37.560499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:26:37.584471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:26:37.589570 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:26:37.648669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:26:37.686259 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:26:37.687904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:26:37.719480 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:26:37.736183 dracut-cmdline[222]: dracut-dracut-053
Aug 5 22:26:37.736183 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:26:37.776870 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:26:37.785855 kernel: Bridge firewalling registered
Aug 5 22:26:37.791128 systemd-modules-load[196]: Inserted module 'br_netfilter'
Aug 5 22:26:37.807375 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:26:37.822120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:26:37.847061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:26:37.867145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:26:37.925400 systemd-resolved[277]: Positive Trust Anchors:
Aug 5 22:26:37.930549 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:26:37.942121 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:26:37.948756 systemd-resolved[277]: Defaulting to hostname 'linux'.
Aug 5 22:26:37.950800 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:26:37.974980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:26:38.035883 kernel: SCSI subsystem initialized
Aug 5 22:26:38.053903 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:26:38.077744 kernel: iscsi: registered transport (tcp)
Aug 5 22:26:38.137146 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:26:38.137274 kernel: QLogic iSCSI HBA Driver
Aug 5 22:26:38.299112 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:26:38.319071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:26:38.390300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:26:38.390399 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:26:38.391693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:26:38.505880 kernel: raid6: avx2x4 gen() 11274 MB/s
Aug 5 22:26:38.522890 kernel: raid6: avx2x2 gen() 10157 MB/s
Aug 5 22:26:38.540372 kernel: raid6: avx2x1 gen() 12070 MB/s
Aug 5 22:26:38.540470 kernel: raid6: using algorithm avx2x1 gen() 12070 MB/s
Aug 5 22:26:38.591473 kernel: raid6: .... xor() 10649 MB/s, rmw enabled
Aug 5 22:26:38.591626 kernel: raid6: using avx2x2 recovery algorithm
Aug 5 22:26:38.653021 kernel: xor: automatically using best checksumming function avx
Aug 5 22:26:39.049293 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:26:39.112192 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:26:39.143622 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:26:39.193323 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Aug 5 22:26:39.209675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:26:39.255547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:26:39.321959 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Aug 5 22:26:39.424619 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:26:39.439356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:26:39.626679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:26:39.705661 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:26:39.737349 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:26:39.746546 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:26:39.747329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:26:39.748169 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:26:39.781164 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:26:39.796909 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 5 22:26:39.840292 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:26:39.840534 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:26:39.840553 kernel: libata version 3.00 loaded.
Aug 5 22:26:39.840582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:26:39.840599 kernel: GPT:9289727 != 19775487
Aug 5 22:26:39.840614 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:26:39.840629 kernel: GPT:9289727 != 19775487
Aug 5 22:26:39.840643 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:26:39.840658 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:26:39.802596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:26:39.828179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:26:39.828713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:26:39.849276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:26:39.853930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:26:39.854226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:39.863727 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:39.911776 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 5 22:26:39.954288 kernel: scsi host0: ata_piix
Aug 5 22:26:39.954577 kernel: scsi host1: ata_piix
Aug 5 22:26:39.954810 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Aug 5 22:26:39.954874 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Aug 5 22:26:39.954897 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:26:39.954931 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:26:39.923034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:39.983849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:26:40.001249 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (469)
Aug 5 22:26:40.027183 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:26:40.034016 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (480)
Aug 5 22:26:40.039704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:26:40.045817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:26:40.046036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:40.078404 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:26:40.107868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:26:40.131404 kernel: ata2: found unknown device (class 0)
Aug 5 22:26:40.131442 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 5 22:26:40.131457 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 5 22:26:40.142238 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:26:40.162344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:40.221694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:40.228231 disk-uuid[553]: Primary Header is updated.
Aug 5 22:26:40.228231 disk-uuid[553]: Secondary Entries is updated.
Aug 5 22:26:40.228231 disk-uuid[553]: Secondary Header is updated.
Aug 5 22:26:40.244413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:26:40.257969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:26:40.279860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:26:40.318869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:26:40.320905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:26:40.350026 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 5 22:26:40.385326 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 5 22:26:40.385352 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Aug 5 22:26:41.403808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:26:41.427704 disk-uuid[561]: The operation has completed successfully.
Aug 5 22:26:41.563715 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:26:41.563922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:26:41.621182 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:26:41.668316 sh[597]: Success
Aug 5 22:26:41.742120 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 5 22:26:41.861082 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:26:41.913517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:26:41.945171 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:26:42.044180 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f
Aug 5 22:26:42.044252 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:26:42.044284 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:26:42.049092 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:26:42.049190 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:26:42.094863 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:26:42.099023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:26:42.130766 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:26:42.145403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:26:42.188349 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:26:42.188445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:26:42.188480 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:26:42.205089 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:26:42.228169 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:26:42.234402 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:26:42.444025 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:26:42.574775 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:26:42.635741 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:26:42.680727 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:26:42.774439 systemd-networkd[778]: lo: Link UP
Aug 5 22:26:42.774460 systemd-networkd[778]: lo: Gained carrier
Aug 5 22:26:42.783358 systemd-networkd[778]: Enumeration completed
Aug 5 22:26:42.783555 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:26:42.794207 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:26:42.794217 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:26:42.804623 systemd[1]: Reached target network.target - Network.
Aug 5 22:26:42.824139 systemd-networkd[778]: eth0: Link UP
Aug 5 22:26:42.824146 systemd-networkd[778]: eth0: Gained carrier
Aug 5 22:26:42.824161 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:26:42.920027 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:26:42.989649 ignition[777]: Ignition 2.19.0
Aug 5 22:26:42.989675 ignition[777]: Stage: fetch-offline
Aug 5 22:26:42.996596 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:26:42.996642 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:26:43.005856 ignition[777]: parsed url from cmdline: ""
Aug 5 22:26:43.005866 ignition[777]: no config URL provided
Aug 5 22:26:43.005879 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:26:43.005901 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:26:43.005954 ignition[777]: op(1): [started] loading QEMU firmware config module
Aug 5 22:26:43.005962 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:26:43.031794 ignition[777]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:26:43.096910 ignition[777]: parsing config with SHA512: 4ae244b117cf7c2a88e46e52de934f606ae353816de7477433b37f6bc06680a9a21510c3052686c5790af10b646952fac05fe391f51f12ad2092c6a88e28f515
Aug 5 22:26:43.111770 unknown[777]: fetched base config from "system"
Aug 5 22:26:43.111784 unknown[777]: fetched user config from "qemu"
Aug 5 22:26:43.115227 ignition[777]: fetch-offline: fetch-offline passed
Aug 5 22:26:43.115385 ignition[777]: Ignition finished successfully
Aug 5 22:26:43.130660 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:26:43.148527 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:26:43.164509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:26:43.232918 ignition[790]: Ignition 2.19.0
Aug 5 22:26:43.232935 ignition[790]: Stage: kargs
Aug 5 22:26:43.233190 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:26:43.233210 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:26:43.234279 ignition[790]: kargs: kargs passed
Aug 5 22:26:43.234338 ignition[790]: Ignition finished successfully
Aug 5 22:26:43.242552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:26:43.258434 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:26:43.284966 ignition[798]: Ignition 2.19.0
Aug 5 22:26:43.284999 ignition[798]: Stage: disks
Aug 5 22:26:43.285323 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:26:43.285343 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:26:43.294487 ignition[798]: disks: disks passed
Aug 5 22:26:43.294631 ignition[798]: Ignition finished successfully
Aug 5 22:26:43.300161 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:26:43.302691 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:26:43.306411 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:26:43.308451 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:26:43.308780 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:26:43.311652 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:26:43.332358 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:26:43.365599 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:26:43.390298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:26:43.410064 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:26:43.743915 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none.
Aug 5 22:26:43.747140 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:26:43.749361 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:26:43.766662 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.26
Aug 5 22:26:43.766683 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Aug 5 22:26:43.779554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:26:43.789510 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:26:43.823311 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817)
Aug 5 22:26:43.791491 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:26:43.846199 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:26:43.846251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:26:43.846265 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:26:43.846278 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:26:43.791648 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:26:43.791696 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:26:43.808337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:26:43.834277 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:26:43.881800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:26:44.046995 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:26:44.061487 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:26:44.070522 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:26:44.088433 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:26:44.444492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:26:44.466887 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:26:44.487636 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:26:44.522096 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:26:44.532926 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:26:44.630058 ignition[930]: INFO : Ignition 2.19.0
Aug 5 22:26:44.630058 ignition[930]: INFO : Stage: mount
Aug 5 22:26:44.630490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:26:44.651999 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:26:44.651999 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:26:44.651999 ignition[930]: INFO : mount: mount passed
Aug 5 22:26:44.651999 ignition[930]: INFO : Ignition finished successfully
Aug 5 22:26:44.650950 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:26:44.675992 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:26:44.770932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:26:44.798587 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
Aug 5 22:26:44.805868 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:26:44.805957 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:26:44.805974 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:26:44.820876 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:26:44.825251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:26:44.892300 systemd-networkd[778]: eth0: Gained IPv6LL
Aug 5 22:26:44.899195 ignition[960]: INFO : Ignition 2.19.0
Aug 5 22:26:44.899195 ignition[960]: INFO : Stage: files
Aug 5 22:26:44.899195 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:26:44.899195 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:26:44.899195 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:26:44.908959 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:26:44.908959 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:26:44.916943 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:26:44.923200 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:26:44.939203 unknown[960]: wrote ssh authorized keys file for user: core
Aug 5 22:26:44.941570 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:26:44.950652 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:26:44.950652 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:26:45.016917 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:26:45.182625 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:26:45.182625 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:26:45.194243 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Aug 5 22:26:45.608806 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:26:47.915423 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:26:47.915423 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 22:26:47.933866 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:26:48.208918 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:26:48.411589 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:26:48.411589 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:26:48.411589 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:26:48.411589 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:26:48.440958 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:26:48.440958 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:26:48.440958 ignition[960]: INFO : files: files passed
Aug 5 22:26:48.440958 ignition[960]: INFO : Ignition finished successfully
Aug 5 22:26:48.458108 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:26:48.492671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:26:48.515209 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:26:48.529036 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:26:48.529239 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:26:48.558011 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Aug 5 22:26:48.567538 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:26:48.571993 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:26:48.571993 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:26:48.574213 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:26:48.586528 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:26:48.622591 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:26:48.680474 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:26:48.682718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:26:48.692283 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:26:48.694558 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:26:48.699223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:26:48.719187 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:26:48.748420 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:26:48.767146 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:26:48.803783 systemd[1]: Stopped target network.target - Network. Aug 5 22:26:48.812420 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:26:48.813749 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Aug 5 22:26:48.814714 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:26:48.820996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:26:48.821196 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:26:48.830146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:26:48.841715 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:26:48.844381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:26:48.855959 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:26:48.869783 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:26:48.873302 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:26:48.875047 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:26:48.892428 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:26:48.898334 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:26:48.903944 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:26:48.907676 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:26:48.907923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:26:48.930729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:26:48.935356 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:26:48.942303 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:26:48.942705 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:26:48.947163 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:26:48.947374 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Aug 5 22:26:48.952990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:26:48.953203 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:26:48.955131 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:26:48.956500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:26:48.959981 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:26:48.962822 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:26:48.965202 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:26:48.966647 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:26:48.966847 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:26:48.968355 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:26:48.968499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:26:48.970016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:26:48.970204 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:26:48.976769 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:26:48.977013 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:26:48.992586 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:26:48.995766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:26:48.995999 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:26:48.999032 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:26:49.008924 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:26:49.011527 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Aug 5 22:26:49.015086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:26:49.015360 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:26:49.019577 systemd-networkd[778]: eth0: DHCPv6 lease lost Aug 5 22:26:49.024125 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:26:49.042108 ignition[1016]: INFO : Ignition 2.19.0 Aug 5 22:26:49.042108 ignition[1016]: INFO : Stage: umount Aug 5 22:26:49.042108 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:26:49.042108 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:26:49.042108 ignition[1016]: INFO : umount: umount passed Aug 5 22:26:49.042108 ignition[1016]: INFO : Ignition finished successfully Aug 5 22:26:49.024339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:26:49.035286 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:26:49.035473 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:26:49.045722 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:26:49.045953 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:26:49.068016 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:26:49.068196 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:26:49.074341 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:26:49.082225 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:26:49.084489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:26:49.091267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:26:49.091375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:26:49.093049 systemd[1]: ignition-disks.service: Deactivated successfully. 
Aug 5 22:26:49.093129 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:26:49.097750 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:26:49.098496 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:26:49.115100 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:26:49.115191 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:26:49.123432 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:26:49.123559 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:26:49.131413 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:26:49.132316 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:26:49.146000 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:26:49.147284 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:26:49.147383 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:26:49.150233 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:26:49.150318 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:26:49.150639 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:26:49.150716 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:26:49.150806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:26:49.150908 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:26:49.151142 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:26:49.162969 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:26:49.163139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 5 22:26:49.187071 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:26:49.187265 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:26:49.191279 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:26:49.191525 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:26:49.201934 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:26:49.202064 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:26:49.222422 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:26:49.222519 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:26:49.224189 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:26:49.224282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:26:49.241364 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:26:49.241487 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:26:49.241919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:26:49.241996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:26:49.266256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:26:49.273025 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:26:49.273171 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:26:49.275128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:26:49.275221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:26:49.296168 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:26:49.296366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Aug 5 22:26:49.313125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:26:49.339196 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:26:49.356730 systemd[1]: Switching root. Aug 5 22:26:49.407223 systemd-journald[195]: Journal stopped Aug 5 22:26:51.511863 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Aug 5 22:26:51.511996 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:26:51.512011 kernel: SELinux: policy capability open_perms=1 Aug 5 22:26:51.512023 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:26:51.512035 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:26:51.512056 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:26:51.512068 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:26:51.512087 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:26:51.512099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:26:51.512118 kernel: audit: type=1403 audit(1722896810.118:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:26:51.512131 systemd[1]: Successfully loaded SELinux policy in 78.266ms. Aug 5 22:26:51.512166 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.669ms. Aug 5 22:26:51.512192 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:26:51.512219 systemd[1]: Detected virtualization kvm. Aug 5 22:26:51.512236 systemd[1]: Detected architecture x86-64. Aug 5 22:26:51.512253 systemd[1]: Detected first boot. Aug 5 22:26:51.512276 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:26:51.512298 zram_generator::config[1059]: No configuration found. 
Aug 5 22:26:51.512320 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:26:51.512333 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:26:51.512345 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:26:51.512358 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:26:51.512372 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:26:51.512385 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:26:51.512402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:26:51.512415 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 22:26:51.512431 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:26:51.512444 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:26:51.512464 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:26:51.512477 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:26:51.512490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:26:51.512503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:26:51.512516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:26:51.512529 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:26:51.512541 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:26:51.512557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:26:51.512569 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Aug 5 22:26:51.512582 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:26:51.512595 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:26:51.512607 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:26:51.512619 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:26:51.512632 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:26:51.512652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:26:51.512664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:26:51.512677 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:26:51.512689 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:26:51.512702 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:26:51.512714 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:26:51.512730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:26:51.512746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:26:51.512759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:26:51.512771 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:26:51.512787 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:26:51.512799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:26:51.512814 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:26:51.512851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:26:51.512865 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Aug 5 22:26:51.512891 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:26:51.512908 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 22:26:51.512925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:26:51.512948 systemd[1]: Reached target machines.target - Containers. Aug 5 22:26:51.512974 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:26:51.513001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:26:51.513019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:26:51.513037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:26:51.513054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:26:51.513071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:26:51.513087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:26:51.513105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:26:51.513128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:26:51.513154 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:26:51.513172 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:26:51.513189 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:26:51.513206 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:26:51.513223 systemd[1]: Stopped systemd-fsck-usr.service. 
Aug 5 22:26:51.513241 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:26:51.513258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:26:51.513275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:26:51.513297 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:26:51.513314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:26:51.513331 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:26:51.513353 systemd[1]: Stopped verity-setup.service. Aug 5 22:26:51.513372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:26:51.513393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:26:51.513410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:26:51.513427 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:26:51.513449 kernel: fuse: init (API version 7.39) Aug 5 22:26:51.513466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:26:51.513483 kernel: loop: module loaded Aug 5 22:26:51.513499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:26:51.513517 kernel: ACPI: bus type drm_connector registered Aug 5 22:26:51.513603 systemd-journald[1128]: Collecting audit messages is disabled. Aug 5 22:26:51.513644 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:26:51.513662 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:26:51.513679 systemd-journald[1128]: Journal started Aug 5 22:26:51.513714 systemd-journald[1128]: Runtime Journal (/run/log/journal/691be27d1e114b09913b810ab42eac5b) is 6.0M, max 48.3M, 42.3M free. 
Aug 5 22:26:51.181876 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:26:51.210499 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 5 22:26:51.211303 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 22:26:51.211996 systemd[1]: systemd-journald.service: Consumed 1.398s CPU time. Aug 5 22:26:51.518526 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:26:51.521318 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:26:51.521649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 22:26:51.534759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:26:51.535105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:26:51.537214 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:26:51.537512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:26:51.539273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:26:51.539614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:26:51.541459 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:26:51.541747 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:26:51.543350 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:26:51.543606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:26:51.545349 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:26:51.546947 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:26:51.549228 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:26:51.572227 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Aug 5 22:26:51.583187 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:26:51.589946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:26:51.591567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:26:51.591630 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:26:51.594592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:26:51.599189 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:26:51.606066 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:26:51.607772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:26:51.610811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:26:51.622046 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:26:51.623687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:26:51.627981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:26:51.629515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:26:51.632703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:26:51.636343 systemd-journald[1128]: Time spent on flushing to /var/log/journal/691be27d1e114b09913b810ab42eac5b is 23.501ms for 990 entries. Aug 5 22:26:51.636343 systemd-journald[1128]: System Journal (/var/log/journal/691be27d1e114b09913b810ab42eac5b) is 8.0M, max 195.6M, 187.6M free. 
Aug 5 22:26:51.809708 systemd-journald[1128]: Received client request to flush runtime journal. Aug 5 22:26:51.809755 kernel: loop0: detected capacity change from 0 to 80568 Aug 5 22:26:51.809773 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:26:51.809909 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:26:51.809927 kernel: loop1: detected capacity change from 0 to 209816 Aug 5 22:26:51.640185 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:26:51.665495 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:26:51.667651 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:26:51.669394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:26:51.671289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:26:51.708946 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:26:51.715125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:26:51.732188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:26:51.736058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:26:51.752145 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 22:26:51.785360 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:26:51.789365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:26:51.802223 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:26:51.805178 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Aug 5 22:26:51.809783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:26:51.815097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:26:51.844692 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:26:51.849325 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:26:51.856143 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Aug 5 22:26:51.856168 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Aug 5 22:26:51.861909 kernel: loop2: detected capacity change from 0 to 139760 Aug 5 22:26:51.865621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:26:51.908363 kernel: loop3: detected capacity change from 0 to 80568 Aug 5 22:26:51.923913 kernel: loop4: detected capacity change from 0 to 209816 Aug 5 22:26:51.937886 kernel: loop5: detected capacity change from 0 to 139760 Aug 5 22:26:51.949857 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 5 22:26:51.951184 (sd-merge)[1200]: Merged extensions into '/usr'. Aug 5 22:26:51.958470 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:26:51.958492 systemd[1]: Reloading... Aug 5 22:26:52.303976 zram_generator::config[1224]: No configuration found. Aug 5 22:26:52.507231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:26:52.584055 systemd[1]: Reloading finished in 624 ms. Aug 5 22:26:52.708247 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Aug 5 22:26:52.746943 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:26:52.779136 systemd[1]: Starting ensure-sysext.service... Aug 5 22:26:52.876461 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:26:52.898109 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:26:52.898136 systemd[1]: Reloading... Aug 5 22:26:52.926483 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:26:52.926916 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:26:52.927936 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:26:52.928248 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 5 22:26:52.928329 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 5 22:26:52.931755 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:26:52.931769 systemd-tmpfiles[1261]: Skipping /boot Aug 5 22:26:52.987761 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:26:52.987783 systemd-tmpfiles[1261]: Skipping /boot Aug 5 22:26:53.003875 zram_generator::config[1286]: No configuration found. Aug 5 22:26:53.133438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:26:53.186809 systemd[1]: Reloading finished in 288 ms. Aug 5 22:26:53.210461 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:26:53.271991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Aug 5 22:26:53.286315 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:26:53.499329 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:26:53.506541 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:26:53.514371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:26:53.593254 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:26:53.599494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:26:53.599735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:26:53.601335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:26:53.604575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:26:53.685926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:26:53.687386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:26:53.690692 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:26:53.692334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:26:53.694174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:26:53.694461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:26:53.696494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:26:53.696684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 5 22:26:53.768586 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:26:53.768898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:26:53.780838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:26:53.792380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:26:53.816460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:26:53.823877 augenrules[1353]: No rules
Aug 5 22:26:53.872799 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:26:53.907735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:26:53.916751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:26:53.920156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:26:53.920416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:26:53.921542 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:26:53.971576 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:26:53.974247 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:26:53.977213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:26:53.977485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:26:53.979898 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:26:53.980146 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:26:54.012375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:26:54.012659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:26:54.019235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:26:54.019488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:26:54.072942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:26:54.080192 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:26:54.091000 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:26:54.096519 systemd-resolved[1336]: Positive Trust Anchors:
Aug 5 22:26:54.096548 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:26:54.096595 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:26:54.153696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:26:54.153949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:26:54.155477 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Aug 5 22:26:54.169316 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:26:54.206814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:26:54.299022 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:26:54.300334 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:26:54.300669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:26:54.302347 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:26:54.306188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:26:54.323922 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:26:54.336625 systemd-udevd[1376]: Using default interface naming scheme 'v255'.
Aug 5 22:26:54.430167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:26:54.506115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:26:54.591590 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1388)
Aug 5 22:26:54.719296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Aug 5 22:26:54.677939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:26:54.740942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 5 22:26:54.741412 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 22:26:54.753863 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:26:54.778987 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Aug 5 22:26:54.787850 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 5 22:26:54.798681 systemd-networkd[1399]: lo: Link UP
Aug 5 22:26:54.798699 systemd-networkd[1399]: lo: Gained carrier
Aug 5 22:26:54.802070 systemd-networkd[1399]: Enumeration completed
Aug 5 22:26:54.803331 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:26:54.803347 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:26:54.804270 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:26:54.852226 systemd-networkd[1399]: eth0: Link UP
Aug 5 22:26:54.852245 systemd-networkd[1399]: eth0: Gained carrier
Aug 5 22:26:54.852280 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:26:54.852957 systemd[1]: Reached target network.target - Network.
Aug 5 22:26:54.868965 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:26:54.870284 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
Aug 5 22:26:54.871089 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:26:55.568821 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 22:26:55.568885 systemd-timesyncd[1375]: Initial clock synchronization to Mon 2024-08-05 22:26:55.568643 UTC.
Aug 5 22:26:55.568943 systemd-resolved[1336]: Clock change detected. Flushing caches.
Aug 5 22:26:55.570729 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:26:55.571823 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:26:55.599362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:26:55.736862 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:26:55.788902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:55.798818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:26:55.799057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:55.802357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:26:55.804092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:26:55.863947 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:26:55.869162 kernel: kvm_amd: TSC scaling supported
Aug 5 22:26:55.869291 kernel: kvm_amd: Nested Virtualization enabled
Aug 5 22:26:55.869313 kernel: kvm_amd: Nested Paging enabled
Aug 5 22:26:55.869331 kernel: kvm_amd: LBR virtualization supported
Aug 5 22:26:55.870240 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 5 22:26:55.870273 kernel: kvm_amd: Virtual GIF supported
Aug 5 22:26:55.916751 kernel: EDAC MC: Ver: 3.0.0
Aug 5 22:26:55.940596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:26:55.978389 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:26:56.057886 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:26:56.126391 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:26:56.162316 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:26:56.182026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:26:56.183567 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:26:56.185214 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:26:56.186843 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:26:56.188558 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:26:56.236478 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:26:56.238070 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:26:56.239573 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:26:56.239639 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:26:56.240787 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:26:56.242935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:26:56.246184 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:26:56.261460 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:26:56.314516 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:26:56.316640 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:26:56.318343 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:26:56.319732 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:26:56.321016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:26:56.321049 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:26:56.322251 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:26:56.324983 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:26:56.327669 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:26:56.329977 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:26:56.384602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:26:56.386161 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:26:56.387585 jq[1433]: false
Aug 5 22:26:56.388215 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:26:56.391861 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:26:56.397357 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:26:56.477752 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:26:56.484985 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:26:56.487331 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:26:56.488064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found loop3
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found loop4
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found loop5
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found sr0
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda1
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda2
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda3
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found usr
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda4
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda6
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda7
Aug 5 22:26:56.492721 extend-filesystems[1434]: Found vda9
Aug 5 22:26:56.492721 extend-filesystems[1434]: Checking size of /dev/vda9
Aug 5 22:26:56.496073 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:26:56.494523 dbus-daemon[1432]: [system] SELinux support is enabled
Aug 5 22:26:56.542801 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:26:56.616268 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:26:56.620382 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:26:56.622700 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:26:56.622921 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:26:56.623314 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:26:56.623515 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:26:56.628113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:26:56.630127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:26:56.634521 jq[1449]: true
Aug 5 22:26:56.644529 extend-filesystems[1434]: Resized partition /dev/vda9
Aug 5 22:26:56.643403 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:26:56.669869 jq[1459]: true
Aug 5 22:26:56.686667 extend-filesystems[1463]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:26:56.712346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1387)
Aug 5 22:26:56.712403 tar[1454]: linux-amd64/helm
Aug 5 22:26:56.687561 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:26:56.720099 update_engine[1446]: I0805 22:26:56.688874 1446 main.cc:92] Flatcar Update Engine starting
Aug 5 22:26:56.720099 update_engine[1446]: I0805 22:26:56.713610 1446 update_check_scheduler.cc:74] Next update check in 9m0s
Aug 5 22:26:56.712527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:26:56.712552 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:26:56.714241 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:26:56.714269 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:26:56.715543 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 5 22:26:56.715574 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:26:56.716819 systemd-logind[1445]: New seat seat0.
Aug 5 22:26:56.718863 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:26:56.722109 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:26:56.737111 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:26:56.969394 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 22:26:57.001808 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:26:57.035513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:26:57.049193 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:26:57.052531 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:41452.service - OpenSSH per-connection server daemon (10.0.0.1:41452).
Aug 5 22:26:57.125315 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:26:57.125614 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:26:57.139203 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:26:57.256946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:26:57.294125 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:26:57.296699 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:26:57.360771 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:26:57.361290 systemd-networkd[1399]: eth0: Gained IPv6LL
Aug 5 22:26:57.370259 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:26:57.372466 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:26:57.469941 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 22:26:57.474271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:26:57.492959 tar[1454]: linux-amd64/LICENSE
Aug 5 22:26:57.493205 tar[1454]: linux-amd64/README.md
Aug 5 22:26:57.495477 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:26:57.557785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:26:57.601612 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:26:57.627354 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 22:26:57.627772 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 22:26:57.631639 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:26:57.652352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:26:57.706150 sshd[1497]: Connection closed by authenticating user core 10.0.0.1 port 41452 [preauth]
Aug 5 22:26:57.709651 systemd[1]: sshd@0-10.0.0.26:22-10.0.0.1:41452.service: Deactivated successfully.
Aug 5 22:26:57.812729 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 22:26:58.184167 containerd[1455]: time="2024-08-05T22:26:58.183945096Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 22:26:58.184500 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 22:26:58.184500 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:26:58.184500 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 22:26:58.191937 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Aug 5 22:26:58.192717 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:26:58.192212 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:26:58.195438 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:26:58.195745 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:26:58.200756 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 22:26:58.222043 containerd[1455]: time="2024-08-05T22:26:58.221953676Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:26:58.222043 containerd[1455]: time="2024-08-05T22:26:58.222019128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224287 containerd[1455]: time="2024-08-05T22:26:58.224233371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224287 containerd[1455]: time="2024-08-05T22:26:58.224269088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224626 containerd[1455]: time="2024-08-05T22:26:58.224589208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224626 containerd[1455]: time="2024-08-05T22:26:58.224617211Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:26:58.224810 containerd[1455]: time="2024-08-05T22:26:58.224780246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224905 containerd[1455]: time="2024-08-05T22:26:58.224878260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:26:58.224905 containerd[1455]: time="2024-08-05T22:26:58.224898829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.225049 containerd[1455]: time="2024-08-05T22:26:58.225020126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.225369 containerd[1455]: time="2024-08-05T22:26:58.225337081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.225369 containerd[1455]: time="2024-08-05T22:26:58.225365975Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:26:58.225429 containerd[1455]: time="2024-08-05T22:26:58.225379510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:26:58.225577 containerd[1455]: time="2024-08-05T22:26:58.225534781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:26:58.225577 containerd[1455]: time="2024-08-05T22:26:58.225569386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:26:58.225700 containerd[1455]: time="2024-08-05T22:26:58.225654386Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:26:58.225700 containerd[1455]: time="2024-08-05T22:26:58.225674303Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:26:58.230753 containerd[1455]: time="2024-08-05T22:26:58.230662237Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:26:58.230753 containerd[1455]: time="2024-08-05T22:26:58.230723051Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:26:58.230753 containerd[1455]: time="2024-08-05T22:26:58.230749080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:26:58.230874 containerd[1455]: time="2024-08-05T22:26:58.230802961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:26:58.230874 containerd[1455]: time="2024-08-05T22:26:58.230823930Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:26:58.230874 containerd[1455]: time="2024-08-05T22:26:58.230839139Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:26:58.230874 containerd[1455]: time="2024-08-05T22:26:58.230854768Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:26:58.231032 containerd[1455]: time="2024-08-05T22:26:58.231001403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:26:58.231032 containerd[1455]: time="2024-08-05T22:26:58.231028654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:26:58.231099 containerd[1455]: time="2024-08-05T22:26:58.231046989Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:26:58.231099 containerd[1455]: time="2024-08-05T22:26:58.231089899Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:26:58.231152 containerd[1455]: time="2024-08-05T22:26:58.231109336Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231152 containerd[1455]: time="2024-08-05T22:26:58.231128732Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231152 containerd[1455]: time="2024-08-05T22:26:58.231145694Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231224 containerd[1455]: time="2024-08-05T22:26:58.231161914Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231224 containerd[1455]: time="2024-08-05T22:26:58.231179097Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231224 containerd[1455]: time="2024-08-05T22:26:58.231195628Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231224 containerd[1455]: time="2024-08-05T22:26:58.231210956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231347 containerd[1455]: time="2024-08-05T22:26:58.231225804Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:26:58.231404 containerd[1455]: time="2024-08-05T22:26:58.231380614Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:26:58.231734 containerd[1455]: time="2024-08-05T22:26:58.231707878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:26:58.231810 containerd[1455]: time="2024-08-05T22:26:58.231744397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.231810 containerd[1455]: time="2024-08-05T22:26:58.231766658Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:26:58.231890 containerd[1455]: time="2024-08-05T22:26:58.231809619Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:26:58.231931 containerd[1455]: time="2024-08-05T22:26:58.231902403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.231931 containerd[1455]: time="2024-08-05T22:26:58.231923643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.231994 containerd[1455]: time="2024-08-05T22:26:58.231949812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.231994 containerd[1455]: time="2024-08-05T22:26:58.231966223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.231994 containerd[1455]: time="2024-08-05T22:26:58.231982533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232070 containerd[1455]: time="2024-08-05T22:26:58.231998433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232070 containerd[1455]: time="2024-08-05T22:26:58.232016016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232070 containerd[1455]: time="2024-08-05T22:26:58.232030373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232070 containerd[1455]: time="2024-08-05T22:26:58.232045802Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:26:58.232309 containerd[1455]: time="2024-08-05T22:26:58.232270163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232309 containerd[1455]: time="2024-08-05T22:26:58.232304748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232376 containerd[1455]: time="2024-08-05T22:26:58.232321138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232376 containerd[1455]: time="2024-08-05T22:26:58.232337439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232376 containerd[1455]: time="2024-08-05T22:26:58.232353309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232376 containerd[1455]: time="2024-08-05T22:26:58.232371493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232522 containerd[1455]: time="2024-08-05T22:26:58.232386260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232522 containerd[1455]: time="2024-08-05T22:26:58.232400487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:26:58.232924 containerd[1455]: time="2024-08-05T22:26:58.232815736Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:26:58.232924 containerd[1455]: time="2024-08-05T22:26:58.232913058Z" level=info msg="Connect containerd service"
Aug 5 22:26:58.233188 containerd[1455]: time="2024-08-05T22:26:58.232950278Z" level=info msg="using legacy CRI server"
Aug 5 22:26:58.233188 containerd[1455]: time="2024-08-05T22:26:58.232961980Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:26:58.233188 containerd[1455]: time="2024-08-05T22:26:58.233068360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:26:58.233853 containerd[1455]: time="2024-08-05T22:26:58.233817154Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:26:58.233901 containerd[1455]: time="2024-08-05T22:26:58.233886594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:26:58.233938 containerd[1455]: time="2024-08-05T22:26:58.233910138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:26:58.233938 containerd[1455]: time="2024-08-05T22:26:58.233923223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:26:58.233989 containerd[1455]: time="2024-08-05T22:26:58.233938421Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:26:58.234056 containerd[1455]: time="2024-08-05T22:26:58.233986261Z" level=info msg="Start subscribing containerd event"
Aug 5 22:26:58.234086 containerd[1455]: time="2024-08-05T22:26:58.234064788Z" level=info msg="Start recovering state"
Aug 5 22:26:58.234159 containerd[1455]: time="2024-08-05T22:26:58.234136272Z" level=info msg="Start event monitor"
Aug 5 22:26:58.234159 containerd[1455]: time="2024-08-05T22:26:58.234149748Z" level=info msg="Start snapshots syncer"
Aug 5 22:26:58.234223 containerd[1455]: time="2024-08-05T22:26:58.234159316Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:26:58.234223 containerd[1455]: time="2024-08-05T22:26:58.234169124Z" level=info msg="Start streaming server"
Aug 5 22:26:58.234330 containerd[1455]: time="2024-08-05T22:26:58.234309056Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:26:58.234404 containerd[1455]: time="2024-08-05T22:26:58.234385850Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:26:58.234481 containerd[1455]: time="2024-08-05T22:26:58.234464929Z" level=info msg="containerd successfully booted in 0.196841s"
Aug 5 22:26:58.234589 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:26:59.509494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:26:59.515544 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:26:59.527896 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:26:59.528808 systemd[1]: Startup finished in 3.229s (kernel) + 13.147s (initrd) + 8.793s (userspace) = 25.170s.
Aug 5 22:27:01.186165 kubelet[1549]: E0805 22:27:01.185975 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:27:01.192987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:27:01.193246 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:27:01.193815 systemd[1]: kubelet.service: Consumed 1.913s CPU time.
Aug 5 22:27:07.735990 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:58440.service - OpenSSH per-connection server daemon (10.0.0.1:58440).
Aug 5 22:27:07.795918 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 58440 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:07.799778 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:07.823895 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:27:07.842855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:27:07.853087 systemd-logind[1445]: New session 1 of user core.
Aug 5 22:27:07.871098 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:27:07.896855 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:27:07.902754 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.076497 systemd[1569]: Queued start job for default target default.target.
Aug 5 22:27:08.087049 systemd[1569]: Created slice app.slice - User Application Slice.
Aug 5 22:27:08.087095 systemd[1569]: Reached target paths.target - Paths.
Aug 5 22:27:08.087118 systemd[1569]: Reached target timers.target - Timers.
Aug 5 22:27:08.093787 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:27:08.117001 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:27:08.117207 systemd[1569]: Reached target sockets.target - Sockets.
Aug 5 22:27:08.117229 systemd[1569]: Reached target basic.target - Basic System.
Aug 5 22:27:08.117305 systemd[1569]: Reached target default.target - Main User Target.
Aug 5 22:27:08.117352 systemd[1569]: Startup finished in 203ms.
Aug 5 22:27:08.121906 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:27:08.128761 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:27:08.220971 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:58454.service - OpenSSH per-connection server daemon (10.0.0.1:58454).
Aug 5 22:27:08.297752 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 58454 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:08.300529 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.311389 systemd-logind[1445]: New session 2 of user core.
Aug 5 22:27:08.329030 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:27:08.399257 sshd[1580]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:08.413212 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:58454.service: Deactivated successfully.
Aug 5 22:27:08.416015 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:27:08.419872 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:27:08.434985 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:58466.service - OpenSSH per-connection server daemon (10.0.0.1:58466).
Aug 5 22:27:08.436985 systemd-logind[1445]: Removed session 2.
Aug 5 22:27:08.486863 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 58466 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:08.489420 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.497423 systemd-logind[1445]: New session 3 of user core.
Aug 5 22:27:08.507061 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:27:08.576076 sshd[1587]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:08.590214 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:58466.service: Deactivated successfully.
Aug 5 22:27:08.599229 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:27:08.604021 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:27:08.611659 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482).
Aug 5 22:27:08.613905 systemd-logind[1445]: Removed session 3.
Aug 5 22:27:08.658090 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:08.660497 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.672472 systemd-logind[1445]: New session 4 of user core.
Aug 5 22:27:08.688086 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:27:08.748563 sshd[1594]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:08.761766 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:58482.service: Deactivated successfully.
Aug 5 22:27:08.764189 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:27:08.766347 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:27:08.777206 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486).
Aug 5 22:27:08.778551 systemd-logind[1445]: Removed session 4.
Aug 5 22:27:08.815488 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:08.817318 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.821848 systemd-logind[1445]: New session 5 of user core.
Aug 5 22:27:08.831917 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:27:08.894120 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:27:08.894450 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:27:08.911288 sudo[1604]: pam_unix(sudo:session): session closed for user root
Aug 5 22:27:08.913460 sshd[1601]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:08.924321 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:58486.service: Deactivated successfully.
Aug 5 22:27:08.926770 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:27:08.928769 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:27:08.937079 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492).
Aug 5 22:27:08.938729 systemd-logind[1445]: Removed session 5.
Aug 5 22:27:08.975762 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:08.978077 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:08.983348 systemd-logind[1445]: New session 6 of user core.
Aug 5 22:27:08.997068 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:27:09.055601 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:27:09.056047 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:27:09.062015 sudo[1613]: pam_unix(sudo:session): session closed for user root
Aug 5 22:27:09.070049 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:27:09.070391 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:27:09.098315 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:27:09.101025 auditctl[1616]: No rules
Aug 5 22:27:09.102860 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:27:09.103165 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:27:09.106063 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:27:09.159166 augenrules[1634]: No rules
Aug 5 22:27:09.163529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:27:09.165762 sudo[1612]: pam_unix(sudo:session): session closed for user root
Aug 5 22:27:09.170241 sshd[1609]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:09.183106 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:58492.service: Deactivated successfully.
Aug 5 22:27:09.186388 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:27:09.191042 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:27:09.208428 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:58502.service - OpenSSH per-connection server daemon (10.0.0.1:58502).
Aug 5 22:27:09.209597 systemd-logind[1445]: Removed session 6.
Aug 5 22:27:09.253519 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 58502 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:27:09.255835 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:09.260801 systemd-logind[1445]: New session 7 of user core.
Aug 5 22:27:09.276034 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:27:09.335912 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:27:09.336332 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:27:09.511913 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:27:09.512135 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:27:10.325409 dockerd[1656]: time="2024-08-05T22:27:10.325199973Z" level=info msg="Starting up"
Aug 5 22:27:10.468360 dockerd[1656]: time="2024-08-05T22:27:10.468266634Z" level=info msg="Loading containers: start."
Aug 5 22:27:10.711731 kernel: Initializing XFRM netlink socket
Aug 5 22:27:10.821124 systemd-networkd[1399]: docker0: Link UP
Aug 5 22:27:10.878270 dockerd[1656]: time="2024-08-05T22:27:10.878185097Z" level=info msg="Loading containers: done."
Aug 5 22:27:10.943944 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1775852810-merged.mount: Deactivated successfully.
Aug 5 22:27:10.988374 dockerd[1656]: time="2024-08-05T22:27:10.988314709Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:27:10.988589 dockerd[1656]: time="2024-08-05T22:27:10.988550862Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:27:10.988753 dockerd[1656]: time="2024-08-05T22:27:10.988718787Z" level=info msg="Daemon has completed initialization"
Aug 5 22:27:11.028558 dockerd[1656]: time="2024-08-05T22:27:11.028449316Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:27:11.028726 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:27:11.443616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:27:11.454888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:11.657951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:11.785706 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:27:12.086828 containerd[1455]: time="2024-08-05T22:27:12.085095715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\""
Aug 5 22:27:12.516594 kubelet[1798]: E0805 22:27:12.516373 1798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:27:12.525452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:27:12.525712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:27:13.299117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644649850.mount: Deactivated successfully.
Aug 5 22:27:15.509606 containerd[1455]: time="2024-08-05T22:27:15.509513561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:15.512638 containerd[1455]: time="2024-08-05T22:27:15.512561818Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317"
Aug 5 22:27:15.514441 containerd[1455]: time="2024-08-05T22:27:15.514353989Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:15.518554 containerd[1455]: time="2024-08-05T22:27:15.518489114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:15.520796 containerd[1455]: time="2024-08-05T22:27:15.520734284Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 3.435571693s"
Aug 5 22:27:15.520886 containerd[1455]: time="2024-08-05T22:27:15.520796401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\""
Aug 5 22:27:15.556598 containerd[1455]: time="2024-08-05T22:27:15.556545493Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\""
Aug 5 22:27:19.745519 containerd[1455]: time="2024-08-05T22:27:19.745426710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:19.782407 containerd[1455]: time="2024-08-05T22:27:19.782274423Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067"
Aug 5 22:27:19.815483 containerd[1455]: time="2024-08-05T22:27:19.815415825Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:19.863989 containerd[1455]: time="2024-08-05T22:27:19.863938112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:19.865260 containerd[1455]: time="2024-08-05T22:27:19.865206500Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 4.308612888s"
Aug 5 22:27:19.865260 containerd[1455]: time="2024-08-05T22:27:19.865239622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\""
Aug 5 22:27:19.989469 containerd[1455]: time="2024-08-05T22:27:19.989411739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\""
Aug 5 22:27:21.882640 containerd[1455]: time="2024-08-05T22:27:21.882529180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:21.883461 containerd[1455]: time="2024-08-05T22:27:21.883415422Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295"
Aug 5 22:27:21.884718 containerd[1455]: time="2024-08-05T22:27:21.884630852Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:21.887476 containerd[1455]: time="2024-08-05T22:27:21.887410545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:21.888821 containerd[1455]: time="2024-08-05T22:27:21.888776767Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.899319682s"
Aug 5 22:27:21.888881 containerd[1455]: time="2024-08-05T22:27:21.888821521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\""
Aug 5 22:27:21.948929 containerd[1455]: time="2024-08-05T22:27:21.948850737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 5 22:27:22.639942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:27:22.657084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:22.957968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:22.964568 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:27:23.267787 kubelet[1905]: E0805 22:27:23.267563 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:27:23.272556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:27:23.272846 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:27:23.619476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521044549.mount: Deactivated successfully.
Aug 5 22:27:24.774372 containerd[1455]: time="2024-08-05T22:27:24.774241413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:24.776445 containerd[1455]: time="2024-08-05T22:27:24.776400893Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769"
Aug 5 22:27:24.778584 containerd[1455]: time="2024-08-05T22:27:24.778528423Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:24.781858 containerd[1455]: time="2024-08-05T22:27:24.781769902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:24.782449 containerd[1455]: time="2024-08-05T22:27:24.782388612Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 2.833486679s"
Aug 5 22:27:24.782449 containerd[1455]: time="2024-08-05T22:27:24.782444707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\""
Aug 5 22:27:24.880292 containerd[1455]: time="2024-08-05T22:27:24.880227575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:27:25.524260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636204153.mount: Deactivated successfully.
Aug 5 22:27:25.538787 containerd[1455]: time="2024-08-05T22:27:25.538648599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:25.540868 containerd[1455]: time="2024-08-05T22:27:25.540789955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:27:25.542770 containerd[1455]: time="2024-08-05T22:27:25.542699095Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:25.545892 containerd[1455]: time="2024-08-05T22:27:25.545834736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:25.546807 containerd[1455]: time="2024-08-05T22:27:25.546718593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 666.431697ms"
Aug 5 22:27:25.546807 containerd[1455]: time="2024-08-05T22:27:25.546783976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:27:25.708109 containerd[1455]: time="2024-08-05T22:27:25.708036562Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:27:26.289093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665630893.mount: Deactivated successfully.
Aug 5 22:27:30.178594 containerd[1455]: time="2024-08-05T22:27:30.178488020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:30.179563 containerd[1455]: time="2024-08-05T22:27:30.179464044Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Aug 5 22:27:30.181194 containerd[1455]: time="2024-08-05T22:27:30.181147641Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:30.185475 containerd[1455]: time="2024-08-05T22:27:30.184973718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:30.186647 containerd[1455]: time="2024-08-05T22:27:30.186598422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.478507577s"
Aug 5 22:27:30.186647 containerd[1455]: time="2024-08-05T22:27:30.186636835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:27:30.212461 containerd[1455]: time="2024-08-05T22:27:30.212408408Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 22:27:30.856651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893490513.mount: Deactivated successfully.
Aug 5 22:27:31.805786 containerd[1455]: time="2024-08-05T22:27:31.805674473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:31.813079 containerd[1455]: time="2024-08-05T22:27:31.813002424Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Aug 5 22:27:31.814817 containerd[1455]: time="2024-08-05T22:27:31.814737604Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:31.818105 containerd[1455]: time="2024-08-05T22:27:31.818014496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:27:31.818852 containerd[1455]: time="2024-08-05T22:27:31.818777762Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.606319458s"
Aug 5 22:27:31.818852 containerd[1455]: time="2024-08-05T22:27:31.818829460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Aug 5 22:27:33.390064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 22:27:33.402928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:33.560104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:33.566709 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:27:33.622706 kubelet[2079]: E0805 22:27:33.622618 2079 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:27:33.628387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:27:33.628623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:27:33.919733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:33.932172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:33.955130 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)...
Aug 5 22:27:33.955164 systemd[1]: Reloading...
Aug 5 22:27:34.062728 zram_generator::config[2134]: No configuration found.
Aug 5 22:27:34.839377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:27:34.959094 systemd[1]: Reloading finished in 1003 ms.
Aug 5 22:27:35.020982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:35.025493 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:35.027938 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:27:35.028330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:35.043403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:35.215931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:35.221869 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:27:35.270220 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:27:35.270220 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:27:35.270220 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:27:35.270792 kubelet[2181]: I0805 22:27:35.270255 2181 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:27:35.741139 kubelet[2181]: I0805 22:27:35.741084 2181 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:27:35.741139 kubelet[2181]: I0805 22:27:35.741124 2181 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:27:35.741454 kubelet[2181]: I0805 22:27:35.741371 2181 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:27:35.767799 kubelet[2181]: I0805 22:27:35.767716 2181 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:27:35.768148 kubelet[2181]: E0805 22:27:35.768095 2181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.785424 kubelet[2181]: I0805 22:27:35.785369 2181 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:27:35.787887 kubelet[2181]: I0805 22:27:35.787847 2181 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:27:35.788085 kubelet[2181]: I0805 22:27:35.788057 2181 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:27:35.788460 kubelet[2181]: I0805 22:27:35.788412 2181 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:27:35.788460 kubelet[2181]: I0805 22:27:35.788437 2181 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:27:35.789559 kubelet[2181]: I0805 22:27:35.789519 2181 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:27:35.791321 kubelet[2181]: I0805 22:27:35.791289 2181 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:27:35.791321 kubelet[2181]: I0805 22:27:35.791318 2181 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:27:35.791380 kubelet[2181]: I0805 22:27:35.791358 2181 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:27:35.791413 kubelet[2181]: I0805 22:27:35.791381 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:27:35.793023 kubelet[2181]: W0805 22:27:35.792958 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.793099 kubelet[2181]: E0805 22:27:35.793077 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.793443 kubelet[2181]: I0805 22:27:35.793411 2181 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:27:35.794182 kubelet[2181]: W0805 22:27:35.794146 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.794182 kubelet[2181]: E0805 22:27:35.794183 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.795542 kubelet[2181]: W0805 22:27:35.795517 2181 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:27:35.796371 kubelet[2181]: I0805 22:27:35.796334 2181 server.go:1232] "Started kubelet"
Aug 5 22:27:35.797956 kubelet[2181]: I0805 22:27:35.797931 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:27:35.799968 kubelet[2181]: I0805 22:27:35.799438 2181 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:27:35.799968 kubelet[2181]: E0805 22:27:35.799878 2181 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f58fe2604fe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 27, 35, 796305892, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 27, 35, 796305892, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.26:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.26:6443: connect: connection refused'(may retry after sleeping)
Aug 5 22:27:35.800287 kubelet[2181]: E0805 22:27:35.800250 2181 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:27:35.800516 kubelet[2181]: E0805 22:27:35.800290 2181 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:27:35.800516 kubelet[2181]: I0805 22:27:35.800389 2181 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:27:35.800516 kubelet[2181]: I0805 22:27:35.800395 2181 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:27:35.800516 kubelet[2181]: I0805 22:27:35.800474 2181 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:27:35.800672 kubelet[2181]: I0805 22:27:35.800560 2181 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:27:35.800910 kubelet[2181]: W0805 22:27:35.800864 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.800910 kubelet[2181]: E0805 22:27:35.800911 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.801439 kubelet[2181]: E0805 22:27:35.801245 2181 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms"
Aug 5 22:27:35.801439 kubelet[2181]: I0805 22:27:35.801374 2181 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:27:35.801936 kubelet[2181]: I0805 22:27:35.801617 2181 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:27:35.823889 kubelet[2181]: I0805 22:27:35.823836 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:27:35.825765 kubelet[2181]: I0805 22:27:35.825718 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:27:35.825820 kubelet[2181]: I0805 22:27:35.825783 2181 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:27:35.825820 kubelet[2181]: I0805 22:27:35.825816 2181 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:27:35.825912 kubelet[2181]: E0805 22:27:35.825887 2181 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:27:35.826985 kubelet[2181]: W0805 22:27:35.826932 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.827035 kubelet[2181]: E0805 22:27:35.826992 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:35.838876 kubelet[2181]: I0805 22:27:35.838825 2181 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:27:35.838876 kubelet[2181]: I0805 22:27:35.838853 2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:27:35.838876 kubelet[2181]: I0805 22:27:35.838880 2181 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:27:35.902604 kubelet[2181]: I0805 22:27:35.902544 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:35.903024 kubelet[2181]: E0805 22:27:35.902989 2181 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Aug 5 22:27:35.926262 kubelet[2181]: E0805 22:27:35.926157 2181 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:27:36.002055 kubelet[2181]: E0805 22:27:36.001881 2181 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms"
Aug 5 22:27:36.104839 kubelet[2181]: I0805 22:27:36.104794 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:36.105303 kubelet[2181]: E0805 22:27:36.105248 2181 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Aug 5 22:27:36.126291 kubelet[2181]: E0805 22:27:36.126266 2181 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:27:36.403107 kubelet[2181]: E0805 22:27:36.403074 2181 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms"
Aug 5 22:27:36.470816 kubelet[2181]: I0805 22:27:36.470660 2181 policy_none.go:49] "None policy: Start"
Aug 5 22:27:36.471804 kubelet[2181]: I0805 22:27:36.471772 2181 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:27:36.471923 kubelet[2181]: I0805 22:27:36.471848 2181 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:27:36.482671 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:27:36.497898 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:27:36.501338 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:27:36.507173 kubelet[2181]: I0805 22:27:36.507114 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:36.507643 kubelet[2181]: E0805 22:27:36.507585 2181 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Aug 5 22:27:36.513078 kubelet[2181]: I0805 22:27:36.513034 2181 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:27:36.513517 kubelet[2181]: I0805 22:27:36.513493 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:27:36.514119 kubelet[2181]: E0805 22:27:36.514095 2181 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 22:27:36.526814 kubelet[2181]: I0805 22:27:36.526732 2181 topology_manager.go:215] "Topology Admit Handler" podUID="8c77dfc4a990b64c01712daacf9d15ec" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:27:36.528410 kubelet[2181]: I0805 22:27:36.528359 2181 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:27:36.529763 kubelet[2181]: I0805 22:27:36.529722 2181 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:27:36.536102 systemd[1]: Created slice kubepods-burstable-pod8c77dfc4a990b64c01712daacf9d15ec.slice - libcontainer container kubepods-burstable-pod8c77dfc4a990b64c01712daacf9d15ec.slice.
Aug 5 22:27:36.557707 systemd[1]: Created slice kubepods-burstable-pod09d96cdeded1d5a51a9712d8a1a0b54a.slice - libcontainer container kubepods-burstable-pod09d96cdeded1d5a51a9712d8a1a0b54a.slice.
Aug 5 22:27:36.563429 systemd[1]: Created slice kubepods-burstable-pod0cc03c154af91f38c5530287ae9cc549.slice - libcontainer container kubepods-burstable-pod0cc03c154af91f38c5530287ae9cc549.slice.
Aug 5 22:27:36.605701 kubelet[2181]: I0805 22:27:36.605599 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 22:27:36.605701 kubelet[2181]: I0805 22:27:36.605662 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:36.605701 kubelet[2181]: I0805 22:27:36.605707 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:36.605946 kubelet[2181]: I0805 22:27:36.605732 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:36.605946 kubelet[2181]: I0805 22:27:36.605760 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:36.605946 kubelet[2181]: I0805 22:27:36.605781 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:36.605946 kubelet[2181]: I0805 22:27:36.605801 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:36.605946 kubelet[2181]: I0805 22:27:36.605820 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:36.606096 kubelet[2181]: I0805 22:27:36.605862 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:36.687587 kubelet[2181]: W0805 22:27:36.687366 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:36.687587 kubelet[2181]: E0805 22:27:36.687450 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:36.856124 kubelet[2181]: E0805 22:27:36.856075 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:36.857073 containerd[1455]: time="2024-08-05T22:27:36.857028714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c77dfc4a990b64c01712daacf9d15ec,Namespace:kube-system,Attempt:0,}"
Aug 5 22:27:36.861357 kubelet[2181]: E0805 22:27:36.861302 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:36.861902 containerd[1455]: time="2024-08-05T22:27:36.861866691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,}"
Aug 5 22:27:36.866219 kubelet[2181]: E0805 22:27:36.866183 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:36.866592 containerd[1455]: time="2024-08-05T22:27:36.866549122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,}"
Aug 5 22:27:37.161522 kubelet[2181]: W0805 22:27:37.161476 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.161522 kubelet[2181]: E0805 22:27:37.161522 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.204599 kubelet[2181]: E0805 22:27:37.204542 2181 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s"
Aug 5 22:27:37.254339 kubelet[2181]: W0805 22:27:37.254264 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.254339 kubelet[2181]: E0805 22:27:37.254341 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.308992 kubelet[2181]: I0805 22:27:37.308936 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:37.309293 kubelet[2181]: E0805 22:27:37.309259 2181 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Aug 5 22:27:37.339175 kubelet[2181]: W0805 22:27:37.339078 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.339175 kubelet[2181]: E0805 22:27:37.339152 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.585058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408838486.mount: Deactivated successfully.
Aug 5 22:27:37.856290 kubelet[2181]: E0805 22:27:37.856142 2181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:37.887903 containerd[1455]: time="2024-08-05T22:27:37.887831968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:27:37.920127 containerd[1455]: time="2024-08-05T22:27:37.920046972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:27:37.965794 containerd[1455]: time="2024-08-05T22:27:37.965614493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 5 22:27:37.969866 containerd[1455]: time="2024-08-05T22:27:37.969771120Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:27:37.970920 containerd[1455]: time="2024-08-05T22:27:37.970843794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:27:37.973481 containerd[1455]: time="2024-08-05T22:27:37.973414763Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:27:37.974567 containerd[1455]: time="2024-08-05T22:27:37.974512495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:27:37.977356 containerd[1455]: time="2024-08-05T22:27:37.977277421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:27:37.979699 containerd[1455]: time="2024-08-05T22:27:37.979622321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.117653176s"
Aug 5 22:27:37.980500 containerd[1455]: time="2024-08-05T22:27:37.980421538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.123261795s"
Aug 5 22:27:37.983939 containerd[1455]: time="2024-08-05T22:27:37.983890350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.117265172s"
Aug 5 22:27:38.143713 containerd[1455]: time="2024-08-05T22:27:38.143558540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:27:38.143713 containerd[1455]: time="2024-08-05T22:27:38.143657338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.143951 containerd[1455]: time="2024-08-05T22:27:38.143724164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:27:38.143951 containerd[1455]: time="2024-08-05T22:27:38.143759652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.144757 containerd[1455]: time="2024-08-05T22:27:38.144370851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:27:38.144757 containerd[1455]: time="2024-08-05T22:27:38.144421056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.144757 containerd[1455]: time="2024-08-05T22:27:38.144443698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:27:38.144757 containerd[1455]: time="2024-08-05T22:27:38.144460882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.144757 containerd[1455]: time="2024-08-05T22:27:38.144596218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:27:38.146184 containerd[1455]: time="2024-08-05T22:27:38.144693683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.146184 containerd[1455]: time="2024-08-05T22:27:38.144757103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:27:38.146184 containerd[1455]: time="2024-08-05T22:27:38.144769506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:38.300528 systemd[1]: Started cri-containerd-9b4baf3d9f6dead6b249385b506a3f6520693adfc800cea439e5c5b0a50420d1.scope - libcontainer container 9b4baf3d9f6dead6b249385b506a3f6520693adfc800cea439e5c5b0a50420d1.
Aug 5 22:27:38.310272 systemd[1]: Started cri-containerd-74d4f166588b5e2fe01a353d9571756c3ca1a1854077b51cec3f2ee41b3da81b.scope - libcontainer container 74d4f166588b5e2fe01a353d9571756c3ca1a1854077b51cec3f2ee41b3da81b.
Aug 5 22:27:38.358109 systemd[1]: Started cri-containerd-87cc731fd64d1200dd039d25ccbe12e7612a1aa85f51c3d35264e86e9006c44b.scope - libcontainer container 87cc731fd64d1200dd039d25ccbe12e7612a1aa85f51c3d35264e86e9006c44b.
Aug 5 22:27:38.532000 containerd[1455]: time="2024-08-05T22:27:38.531807053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b4baf3d9f6dead6b249385b506a3f6520693adfc800cea439e5c5b0a50420d1\""
Aug 5 22:27:38.534787 kubelet[2181]: E0805 22:27:38.534392 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:38.544938 containerd[1455]: time="2024-08-05T22:27:38.544106784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"74d4f166588b5e2fe01a353d9571756c3ca1a1854077b51cec3f2ee41b3da81b\""
Aug 5 22:27:38.545244 containerd[1455]: time="2024-08-05T22:27:38.545196440Z" level=info msg="CreateContainer within sandbox \"9b4baf3d9f6dead6b249385b506a3f6520693adfc800cea439e5c5b0a50420d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 22:27:38.546344 kubelet[2181]: E0805 22:27:38.546312 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:38.552048 containerd[1455]: time="2024-08-05T22:27:38.552002524Z" level=info msg="CreateContainer within sandbox \"74d4f166588b5e2fe01a353d9571756c3ca1a1854077b51cec3f2ee41b3da81b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 22:27:38.687790 kubelet[2181]: E0805 22:27:38.687619 2181 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f58fe2604fe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 27, 35, 796305892, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 27, 35, 796305892, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.26:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.26:6443: connect: connection refused'(may retry after sleeping)
Aug 5 22:27:38.805375 kubelet[2181]: E0805 22:27:38.805197 2181 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s"
Aug 5 22:27:38.912132 kubelet[2181]: I0805 22:27:38.912073 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:38.912836 kubelet[2181]: E0805 22:27:38.912767 2181 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Aug 5 22:27:38.940216 containerd[1455]: time="2024-08-05T22:27:38.940140047Z" level=info msg="CreateContainer within sandbox \"9b4baf3d9f6dead6b249385b506a3f6520693adfc800cea439e5c5b0a50420d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11d1ebeedfb085cfc4e5640409f2bdb0d75e1e974bffbe3f247ef58f24b3d33e\""
Aug 5 22:27:38.942715 containerd[1455]: time="2024-08-05T22:27:38.941403873Z" level=info msg="StartContainer for \"11d1ebeedfb085cfc4e5640409f2bdb0d75e1e974bffbe3f247ef58f24b3d33e\""
Aug 5 22:27:38.944382 containerd[1455]: time="2024-08-05T22:27:38.943887142Z" level=info msg="CreateContainer within sandbox \"74d4f166588b5e2fe01a353d9571756c3ca1a1854077b51cec3f2ee41b3da81b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"342b2c1ddf14985e2bdbbbc42d8736940c03374c582413b3ffcbb6684b829938\""
Aug 5 22:27:38.944809 containerd[1455]: time="2024-08-05T22:27:38.944769986Z" level=info msg="StartContainer for \"342b2c1ddf14985e2bdbbbc42d8736940c03374c582413b3ffcbb6684b829938\""
Aug 5 22:27:38.945068 containerd[1455]: time="2024-08-05T22:27:38.945032002Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c77dfc4a990b64c01712daacf9d15ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cc731fd64d1200dd039d25ccbe12e7612a1aa85f51c3d35264e86e9006c44b\"" Aug 5 22:27:38.946902 kubelet[2181]: E0805 22:27:38.946869 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:27:38.949248 containerd[1455]: time="2024-08-05T22:27:38.949212719Z" level=info msg="CreateContainer within sandbox \"87cc731fd64d1200dd039d25ccbe12e7612a1aa85f51c3d35264e86e9006c44b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:27:38.975516 containerd[1455]: time="2024-08-05T22:27:38.975305332Z" level=info msg="CreateContainer within sandbox \"87cc731fd64d1200dd039d25ccbe12e7612a1aa85f51c3d35264e86e9006c44b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e10c6841d7c581306272c3362dd967c0a8360508a4ecfd015dab98949c073709\"" Aug 5 22:27:38.975985 containerd[1455]: time="2024-08-05T22:27:38.975933092Z" level=info msg="StartContainer for \"e10c6841d7c581306272c3362dd967c0a8360508a4ecfd015dab98949c073709\"" Aug 5 22:27:38.978897 systemd[1]: Started cri-containerd-11d1ebeedfb085cfc4e5640409f2bdb0d75e1e974bffbe3f247ef58f24b3d33e.scope - libcontainer container 11d1ebeedfb085cfc4e5640409f2bdb0d75e1e974bffbe3f247ef58f24b3d33e. Aug 5 22:27:38.983534 systemd[1]: Started cri-containerd-342b2c1ddf14985e2bdbbbc42d8736940c03374c582413b3ffcbb6684b829938.scope - libcontainer container 342b2c1ddf14985e2bdbbbc42d8736940c03374c582413b3ffcbb6684b829938. 
Aug 5 22:27:39.206371 kubelet[2181]: W0805 22:27:39.206286 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.206371 kubelet[2181]: E0805 22:27:39.206374 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.225912 systemd[1]: Started cri-containerd-e10c6841d7c581306272c3362dd967c0a8360508a4ecfd015dab98949c073709.scope - libcontainer container e10c6841d7c581306272c3362dd967c0a8360508a4ecfd015dab98949c073709.
Aug 5 22:27:39.575179 kubelet[2181]: W0805 22:27:39.574650 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.575179 kubelet[2181]: E0805 22:27:39.574758 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.575720 kubelet[2181]: W0805 22:27:39.575578 2181 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.575720 kubelet[2181]: E0805 22:27:39.575650 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Aug 5 22:27:39.587830 containerd[1455]: time="2024-08-05T22:27:39.587763047Z" level=info msg="StartContainer for \"11d1ebeedfb085cfc4e5640409f2bdb0d75e1e974bffbe3f247ef58f24b3d33e\" returns successfully"
Aug 5 22:27:39.624703 containerd[1455]: time="2024-08-05T22:27:39.624167124Z" level=info msg="StartContainer for \"342b2c1ddf14985e2bdbbbc42d8736940c03374c582413b3ffcbb6684b829938\" returns successfully"
Aug 5 22:27:39.635634 containerd[1455]: time="2024-08-05T22:27:39.635548909Z" level=info msg="StartContainer for \"e10c6841d7c581306272c3362dd967c0a8360508a4ecfd015dab98949c073709\" returns successfully"
Aug 5 22:27:39.842171 kubelet[2181]: E0805 22:27:39.841718 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:39.844222 kubelet[2181]: E0805 22:27:39.844195 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:39.846474 kubelet[2181]: E0805 22:27:39.846407 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:40.848355 kubelet[2181]: E0805 22:27:40.848305 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:40.848812 kubelet[2181]: E0805 22:27:40.848748 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:40.849630 kubelet[2181]: E0805 22:27:40.849573 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:41.395747 kubelet[2181]: E0805 22:27:41.395667 2181 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 5 22:27:41.755641 kubelet[2181]: E0805 22:27:41.755486 2181 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Aug 5 22:27:41.849736 kubelet[2181]: E0805 22:27:41.849675 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:42.010210 kubelet[2181]: E0805 22:27:42.010063 2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 5 22:27:42.115226 kubelet[2181]: I0805 22:27:42.115189 2181 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:42.155703 kubelet[2181]: I0805 22:27:42.155627 2181 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Aug 5 22:27:42.197342 kubelet[2181]: E0805 22:27:42.197285 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.298582 kubelet[2181]: E0805 22:27:42.298395 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.361906 update_engine[1446]: I0805 22:27:42.361783 1446 update_attempter.cc:509] Updating boot flags...
Aug 5 22:27:42.398985 kubelet[2181]: E0805 22:27:42.398894 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.499339 kubelet[2181]: E0805 22:27:42.499266 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.516902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2470)
Aug 5 22:27:42.599878 kubelet[2181]: E0805 22:27:42.599475 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.617933 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2470)
Aug 5 22:27:42.700122 kubelet[2181]: E0805 22:27:42.700030 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.800950 kubelet[2181]: E0805 22:27:42.800905 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:42.901119 kubelet[2181]: E0805 22:27:42.901052 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.001600 kubelet[2181]: E0805 22:27:43.001536 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.102320 kubelet[2181]: E0805 22:27:43.102251 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.203263 kubelet[2181]: E0805 22:27:43.203089 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.304115 kubelet[2181]: E0805 22:27:43.304031 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.404894 kubelet[2181]: E0805 22:27:43.404812 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.505585 kubelet[2181]: E0805 22:27:43.505420 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.606390 kubelet[2181]: E0805 22:27:43.606317 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.706938 kubelet[2181]: E0805 22:27:43.706833 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:43.808112 kubelet[2181]: E0805 22:27:43.807923 2181 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:44.797467 kubelet[2181]: I0805 22:27:44.797388 2181 apiserver.go:52] "Watching apiserver"
Aug 5 22:27:44.800649 kubelet[2181]: I0805 22:27:44.800616 2181 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:27:44.933959 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-7.scope)...
Aug 5 22:27:44.933980 systemd[1]: Reloading...
Aug 5 22:27:45.030734 zram_generator::config[2521]: No configuration found.
Aug 5 22:27:45.160967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:27:45.276781 systemd[1]: Reloading finished in 342 ms.
Aug 5 22:27:45.325064 kubelet[2181]: I0805 22:27:45.324913 2181 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:27:45.324952 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:45.342975 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:27:45.343387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:45.343474 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 114.0M memory peak, 0B memory swap peak.
Aug 5 22:27:45.352072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:27:45.523146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:27:45.536402 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:27:45.645410 kubelet[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:27:45.645410 kubelet[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:27:45.645410 kubelet[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:27:45.646004 kubelet[2560]: I0805 22:27:45.645438 2560 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:27:45.651809 kubelet[2560]: I0805 22:27:45.651761 2560 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:27:45.651809 kubelet[2560]: I0805 22:27:45.651791 2560 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:27:45.652104 kubelet[2560]: I0805 22:27:45.652083 2560 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:27:45.653800 kubelet[2560]: I0805 22:27:45.653721 2560 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 22:27:45.655195 kubelet[2560]: I0805 22:27:45.655161 2560 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.671673 2560 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.671961 2560 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.672138 2560 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.672172 2560 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.672184 2560 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:27:45.673386 kubelet[2560]: I0805 22:27:45.672251 2560 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:27:45.673822 kubelet[2560]: I0805 22:27:45.672374 2560 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:27:45.673822 kubelet[2560]: I0805 22:27:45.672390 2560 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:27:45.673822 kubelet[2560]: I0805 22:27:45.672505 2560 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:27:45.673822 kubelet[2560]: I0805 22:27:45.672537 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:27:45.674088 kubelet[2560]: I0805 22:27:45.674057 2560 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:27:45.679715 kubelet[2560]: I0805 22:27:45.677338 2560 server.go:1232] "Started kubelet"
Aug 5 22:27:45.680098 kubelet[2560]: I0805 22:27:45.680073 2560 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:27:45.680588 kubelet[2560]: I0805 22:27:45.680542 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:27:45.684354 kubelet[2560]: I0805 22:27:45.684323 2560 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:27:45.685864 kubelet[2560]: E0805 22:27:45.685843 2560 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:27:45.685966 kubelet[2560]: E0805 22:27:45.685955 2560 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:27:45.686810 kubelet[2560]: I0805 22:27:45.686779 2560 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:27:45.687097 kubelet[2560]: I0805 22:27:45.687064 2560 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:27:45.687595 kubelet[2560]: E0805 22:27:45.687338 2560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:27:45.687595 kubelet[2560]: I0805 22:27:45.687408 2560 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:27:45.687595 kubelet[2560]: I0805 22:27:45.687509 2560 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:27:45.687734 kubelet[2560]: I0805 22:27:45.687724 2560 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:27:45.695129 kubelet[2560]: I0805 22:27:45.695094 2560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:27:45.699110 kubelet[2560]: I0805 22:27:45.699072 2560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:27:45.699268 kubelet[2560]: I0805 22:27:45.699247 2560 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:27:45.699403 kubelet[2560]: I0805 22:27:45.699391 2560 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:27:45.699521 kubelet[2560]: E0805 22:27:45.699508 2560 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:27:45.764226 kubelet[2560]: I0805 22:27:45.764171 2560 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:27:45.764226 kubelet[2560]: I0805 22:27:45.764207 2560 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:27:45.764432 kubelet[2560]: I0805 22:27:45.764247 2560 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:27:45.764512 kubelet[2560]: I0805 22:27:45.764483 2560 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 22:27:45.764566 kubelet[2560]: I0805 22:27:45.764518 2560 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 22:27:45.764566 kubelet[2560]: I0805 22:27:45.764534 2560 policy_none.go:49] "None policy: Start"
Aug 5 22:27:45.765361 kubelet[2560]: I0805 22:27:45.765331 2560 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:27:45.765426 kubelet[2560]: I0805 22:27:45.765381 2560 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:27:45.765586 kubelet[2560]: I0805 22:27:45.765569 2560 state_mem.go:75] "Updated machine memory state"
Aug 5 22:27:45.771020 kubelet[2560]: I0805 22:27:45.770914 2560 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:27:45.771561 kubelet[2560]: I0805 22:27:45.771548 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:27:45.793559 kubelet[2560]: I0805 22:27:45.793401 2560 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:27:45.801737 kubelet[2560]: I0805 22:27:45.800997 2560 topology_manager.go:215] "Topology Admit Handler" podUID="8c77dfc4a990b64c01712daacf9d15ec" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:27:45.801938 kubelet[2560]: I0805 22:27:45.801866 2560 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Aug 5 22:27:45.802004 kubelet[2560]: I0805 22:27:45.801947 2560 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Aug 5 22:27:45.802354 kubelet[2560]: I0805 22:27:45.802109 2560 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:27:45.802354 kubelet[2560]: I0805 22:27:45.802191 2560 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:27:45.888123 kubelet[2560]: I0805 22:27:45.888066 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 22:27:45.888342 kubelet[2560]: I0805 22:27:45.888126 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:45.888342 kubelet[2560]: I0805 22:27:45.888215 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:45.888342 kubelet[2560]: I0805 22:27:45.888243 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:45.888342 kubelet[2560]: I0805 22:27:45.888272 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:45.888342 kubelet[2560]: I0805 22:27:45.888298 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:45.888508 kubelet[2560]: I0805 22:27:45.888323 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:45.888508 kubelet[2560]: I0805 22:27:45.888351 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c77dfc4a990b64c01712daacf9d15ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c77dfc4a990b64c01712daacf9d15ec\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:45.888508 kubelet[2560]: I0805 22:27:45.888426 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:27:46.116128 kubelet[2560]: E0805 22:27:46.115945 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.116721 kubelet[2560]: E0805 22:27:46.116560 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.116818 kubelet[2560]: E0805 22:27:46.116798 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.673824 kubelet[2560]: I0805 22:27:46.673760 2560 apiserver.go:52] "Watching apiserver"
Aug 5 22:27:46.688451 kubelet[2560]: I0805 22:27:46.688390 2560 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:27:46.716415 kubelet[2560]: E0805 22:27:46.716381 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.716628 kubelet[2560]: E0805 22:27:46.716606 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.721300 kubelet[2560]: E0805 22:27:46.721270 2560 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 5 22:27:46.721732 kubelet[2560]: E0805 22:27:46.721712 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:46.745215 kubelet[2560]: I0805 22:27:46.744991 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.744754913 podCreationTimestamp="2024-08-05 22:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:27:46.74457361 +0000 UTC m=+1.161251874" watchObservedRunningTime="2024-08-05 22:27:46.744754913 +0000 UTC m=+1.161433167"
Aug 5 22:27:46.745215 kubelet[2560]: I0805 22:27:46.745120 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.745094755 podCreationTimestamp="2024-08-05 22:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:27:46.739819066 +0000 UTC m=+1.156497320" watchObservedRunningTime="2024-08-05 22:27:46.745094755 +0000 UTC m=+1.161773009"
Aug 5 22:27:46.759871 kubelet[2560]: I0805 22:27:46.759831 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.759794743 podCreationTimestamp="2024-08-05 22:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:27:46.753882042 +0000 UTC m=+1.170560296" watchObservedRunningTime="2024-08-05 22:27:46.759794743 +0000 UTC m=+1.176472997"
Aug 5 22:27:47.718353 kubelet[2560]: E0805 22:27:47.718275 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:48.370944 kubelet[2560]: E0805 22:27:48.370539 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:48.730608 kubelet[2560]: E0805 22:27:48.730530 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:51.507657 sudo[1645]: pam_unix(sudo:session): session closed for user root
Aug 5 22:27:51.512251 sshd[1642]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:51.517033 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:58502.service: Deactivated successfully.
Aug 5 22:27:51.519600 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:27:51.519937 systemd[1]: session-7.scope: Consumed 5.775s CPU time, 140.5M memory peak, 0B memory swap peak.
Aug 5 22:27:51.520500 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:27:51.521836 systemd-logind[1445]: Removed session 7.
Aug 5 22:27:53.042228 kubelet[2560]: E0805 22:27:53.041345 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:53.732289 kubelet[2560]: E0805 22:27:53.732251 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:56.968151 kubelet[2560]: E0805 22:27:56.968056 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:58.066453 kubelet[2560]: I0805 22:27:58.066386 2560 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:27:58.068176 containerd[1455]: time="2024-08-05T22:27:58.068116935Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:27:58.069316 kubelet[2560]: I0805 22:27:58.068930 2560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:27:58.375060 kubelet[2560]: E0805 22:27:58.374927 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:58.757117 kubelet[2560]: I0805 22:27:58.756468 2560 topology_manager.go:215] "Topology Admit Handler" podUID="250026e2-deeb-43a3-8c09-39bbe7080a62" podNamespace="kube-system" podName="kube-proxy-8j924"
Aug 5 22:27:58.773731 systemd[1]: Created slice kubepods-besteffort-pod250026e2_deeb_43a3_8c09_39bbe7080a62.slice - libcontainer container kubepods-besteffort-pod250026e2_deeb_43a3_8c09_39bbe7080a62.slice.
Aug 5 22:27:58.863974 kubelet[2560]: I0805 22:27:58.863853 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250026e2-deeb-43a3-8c09-39bbe7080a62-lib-modules\") pod \"kube-proxy-8j924\" (UID: \"250026e2-deeb-43a3-8c09-39bbe7080a62\") " pod="kube-system/kube-proxy-8j924"
Aug 5 22:27:58.863974 kubelet[2560]: I0805 22:27:58.863913 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmrt\" (UniqueName: \"kubernetes.io/projected/250026e2-deeb-43a3-8c09-39bbe7080a62-kube-api-access-wcmrt\") pod \"kube-proxy-8j924\" (UID: \"250026e2-deeb-43a3-8c09-39bbe7080a62\") " pod="kube-system/kube-proxy-8j924"
Aug 5 22:27:58.864308 kubelet[2560]: I0805 22:27:58.864077 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/250026e2-deeb-43a3-8c09-39bbe7080a62-kube-proxy\") pod \"kube-proxy-8j924\" (UID: \"250026e2-deeb-43a3-8c09-39bbe7080a62\") " pod="kube-system/kube-proxy-8j924"
Aug 5 22:27:58.865837 kubelet[2560]: I0805 22:27:58.865818 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250026e2-deeb-43a3-8c09-39bbe7080a62-xtables-lock\") pod \"kube-proxy-8j924\" (UID: \"250026e2-deeb-43a3-8c09-39bbe7080a62\") " pod="kube-system/kube-proxy-8j924"
Aug 5 22:27:59.069545 kubelet[2560]: I0805 22:27:59.069349 2560 topology_manager.go:215] "Topology Admit Handler" podUID="5651f602-7ec9-4bf6-9f2f-d9a96beb7fad" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-cv7vq"
Aug 5 22:27:59.079611 systemd[1]: Created slice kubepods-besteffort-pod5651f602_7ec9_4bf6_9f2f_d9a96beb7fad.slice - libcontainer container kubepods-besteffort-pod5651f602_7ec9_4bf6_9f2f_d9a96beb7fad.slice.
Aug 5 22:27:59.082880 kubelet[2560]: E0805 22:27:59.082839 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:59.084199 containerd[1455]: time="2024-08-05T22:27:59.083754445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8j924,Uid:250026e2-deeb-43a3-8c09-39bbe7080a62,Namespace:kube-system,Attempt:0,}"
Aug 5 22:27:59.127671 containerd[1455]: time="2024-08-05T22:27:59.127046163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:27:59.127933 containerd[1455]: time="2024-08-05T22:27:59.127644548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:59.127933 containerd[1455]: time="2024-08-05T22:27:59.127912893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:27:59.128090 containerd[1455]: time="2024-08-05T22:27:59.128056123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:59.155897 systemd[1]: Started cri-containerd-3f607069355583f81733049eee17c0328d17acb89da59f4bab7d26b02c5889ae.scope - libcontainer container 3f607069355583f81733049eee17c0328d17acb89da59f4bab7d26b02c5889ae.
Aug 5 22:27:59.170079 kubelet[2560]: I0805 22:27:59.168763 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wvw\" (UniqueName: \"kubernetes.io/projected/5651f602-7ec9-4bf6-9f2f-d9a96beb7fad-kube-api-access-x8wvw\") pod \"tigera-operator-76c4974c85-cv7vq\" (UID: \"5651f602-7ec9-4bf6-9f2f-d9a96beb7fad\") " pod="tigera-operator/tigera-operator-76c4974c85-cv7vq"
Aug 5 22:27:59.170079 kubelet[2560]: I0805 22:27:59.168838 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5651f602-7ec9-4bf6-9f2f-d9a96beb7fad-var-lib-calico\") pod \"tigera-operator-76c4974c85-cv7vq\" (UID: \"5651f602-7ec9-4bf6-9f2f-d9a96beb7fad\") " pod="tigera-operator/tigera-operator-76c4974c85-cv7vq"
Aug 5 22:27:59.187779 containerd[1455]: time="2024-08-05T22:27:59.187711686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8j924,Uid:250026e2-deeb-43a3-8c09-39bbe7080a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f607069355583f81733049eee17c0328d17acb89da59f4bab7d26b02c5889ae\""
Aug 5 22:27:59.188641 kubelet[2560]: E0805 22:27:59.188615 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:27:59.191250 containerd[1455]: time="2024-08-05T22:27:59.191209835Z" level=info msg="CreateContainer within sandbox \"3f607069355583f81733049eee17c0328d17acb89da59f4bab7d26b02c5889ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:27:59.237941 containerd[1455]: time="2024-08-05T22:27:59.237872192Z" level=info msg="CreateContainer within sandbox \"3f607069355583f81733049eee17c0328d17acb89da59f4bab7d26b02c5889ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce90fa15313e66912b109dc96512ed39b1b8ee2f6c9c48eb309e32e537d75f64\""
Aug 5 22:27:59.238620 containerd[1455]: time="2024-08-05T22:27:59.238580264Z" level=info msg="StartContainer for \"ce90fa15313e66912b109dc96512ed39b1b8ee2f6c9c48eb309e32e537d75f64\""
Aug 5 22:27:59.274036 systemd[1]: Started cri-containerd-ce90fa15313e66912b109dc96512ed39b1b8ee2f6c9c48eb309e32e537d75f64.scope - libcontainer container ce90fa15313e66912b109dc96512ed39b1b8ee2f6c9c48eb309e32e537d75f64.
Aug 5 22:27:59.327754 containerd[1455]: time="2024-08-05T22:27:59.327600764Z" level=info msg="StartContainer for \"ce90fa15313e66912b109dc96512ed39b1b8ee2f6c9c48eb309e32e537d75f64\" returns successfully"
Aug 5 22:27:59.385028 containerd[1455]: time="2024-08-05T22:27:59.384970378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-cv7vq,Uid:5651f602-7ec9-4bf6-9f2f-d9a96beb7fad,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:27:59.428128 containerd[1455]: time="2024-08-05T22:27:59.428015583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:27:59.428128 containerd[1455]: time="2024-08-05T22:27:59.428072700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:59.428128 containerd[1455]: time="2024-08-05T22:27:59.428088780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:27:59.428128 containerd[1455]: time="2024-08-05T22:27:59.428099129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:27:59.454001 systemd[1]: Started cri-containerd-f67cf4efd79ecf650bd07324135802393bd14697bfa9b471179efb437e1a90de.scope - libcontainer container f67cf4efd79ecf650bd07324135802393bd14697bfa9b471179efb437e1a90de.
Aug 5 22:27:59.495623 containerd[1455]: time="2024-08-05T22:27:59.495581333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-cv7vq,Uid:5651f602-7ec9-4bf6-9f2f-d9a96beb7fad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f67cf4efd79ecf650bd07324135802393bd14697bfa9b471179efb437e1a90de\""
Aug 5 22:27:59.497362 containerd[1455]: time="2024-08-05T22:27:59.497325042Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:27:59.745190 kubelet[2560]: E0805 22:27:59.745157 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:28:00.840586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604422025.mount: Deactivated successfully.
Aug 5 22:28:01.533408 containerd[1455]: time="2024-08-05T22:28:01.533303298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:28:01.538708 containerd[1455]: time="2024-08-05T22:28:01.538577193Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068"
Aug 5 22:28:01.567515 containerd[1455]: time="2024-08-05T22:28:01.567447490Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:28:01.597159 containerd[1455]: time="2024-08-05T22:28:01.597091834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:28:01.598019 containerd[1455]: time="2024-08-05T22:28:01.597960517Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.100604867s"
Aug 5 22:28:01.598019 containerd[1455]: time="2024-08-05T22:28:01.597994390Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:28:01.599630 containerd[1455]: time="2024-08-05T22:28:01.599598375Z" level=info msg="CreateContainer within sandbox \"f67cf4efd79ecf650bd07324135802393bd14697bfa9b471179efb437e1a90de\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:28:01.880454 containerd[1455]: time="2024-08-05T22:28:01.879910675Z" level=info msg="CreateContainer within sandbox \"f67cf4efd79ecf650bd07324135802393bd14697bfa9b471179efb437e1a90de\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8dc49ee3511404a378d14b22521bcf28cac7b41dbc0be40015dd925273b18856\""
Aug 5 22:28:01.881022 containerd[1455]: time="2024-08-05T22:28:01.880987610Z" level=info msg="StartContainer for \"8dc49ee3511404a378d14b22521bcf28cac7b41dbc0be40015dd925273b18856\""
Aug 5 22:28:01.921977 systemd[1]: Started cri-containerd-8dc49ee3511404a378d14b22521bcf28cac7b41dbc0be40015dd925273b18856.scope - libcontainer container 8dc49ee3511404a378d14b22521bcf28cac7b41dbc0be40015dd925273b18856.
Aug 5 22:28:01.958261 containerd[1455]: time="2024-08-05T22:28:01.958088774Z" level=info msg="StartContainer for \"8dc49ee3511404a378d14b22521bcf28cac7b41dbc0be40015dd925273b18856\" returns successfully"
Aug 5 22:28:02.761157 kubelet[2560]: I0805 22:28:02.760660 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8j924" podStartSLOduration=4.760600688 podCreationTimestamp="2024-08-05 22:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:27:59.75493189 +0000 UTC m=+14.171610154" watchObservedRunningTime="2024-08-05 22:28:02.760600688 +0000 UTC m=+17.177278942"
Aug 5 22:28:02.761157 kubelet[2560]: I0805 22:28:02.761080 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-cv7vq" podStartSLOduration=1.659605408 podCreationTimestamp="2024-08-05 22:27:59 +0000 UTC" firstStartedPulling="2024-08-05 22:27:59.496831945 +0000 UTC m=+13.913510199" lastFinishedPulling="2024-08-05 22:28:01.598277552 +0000 UTC m=+16.014955806" observedRunningTime="2024-08-05 22:28:02.760128911 +0000 UTC m=+17.176807165" watchObservedRunningTime="2024-08-05 22:28:02.761051015 +0000 UTC m=+17.177729289"
Aug 5 22:28:06.200338 kubelet[2560]: I0805 22:28:06.191452 2560 topology_manager.go:215] "Topology Admit Handler" podUID="ab68857c-b471-4bd0-b69a-d5a040d46f49" podNamespace="calico-system" podName="calico-typha-778566b5f9-q7jkg"
Aug 5 22:28:06.242144 systemd[1]: Created slice kubepods-besteffort-podab68857c_b471_4bd0_b69a_d5a040d46f49.slice - libcontainer container kubepods-besteffort-podab68857c_b471_4bd0_b69a_d5a040d46f49.slice.
Aug 5 22:28:06.321498 kubelet[2560]: I0805 22:28:06.321048 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848mc\" (UniqueName: \"kubernetes.io/projected/ab68857c-b471-4bd0-b69a-d5a040d46f49-kube-api-access-848mc\") pod \"calico-typha-778566b5f9-q7jkg\" (UID: \"ab68857c-b471-4bd0-b69a-d5a040d46f49\") " pod="calico-system/calico-typha-778566b5f9-q7jkg"
Aug 5 22:28:06.321498 kubelet[2560]: I0805 22:28:06.321119 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab68857c-b471-4bd0-b69a-d5a040d46f49-tigera-ca-bundle\") pod \"calico-typha-778566b5f9-q7jkg\" (UID: \"ab68857c-b471-4bd0-b69a-d5a040d46f49\") " pod="calico-system/calico-typha-778566b5f9-q7jkg"
Aug 5 22:28:06.321498 kubelet[2560]: I0805 22:28:06.321322 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab68857c-b471-4bd0-b69a-d5a040d46f49-typha-certs\") pod \"calico-typha-778566b5f9-q7jkg\" (UID: \"ab68857c-b471-4bd0-b69a-d5a040d46f49\") " pod="calico-system/calico-typha-778566b5f9-q7jkg"
Aug 5 22:28:06.539899 kubelet[2560]: I0805 22:28:06.539048 2560 topology_manager.go:215] "Topology Admit Handler" podUID="31fcd41d-c61d-49e3-9382-1a1975f360b4" podNamespace="calico-system" podName="calico-node-6f884"
Aug 5 22:28:06.552858 systemd[1]: Created slice kubepods-besteffort-pod31fcd41d_c61d_49e3_9382_1a1975f360b4.slice - libcontainer container kubepods-besteffort-pod31fcd41d_c61d_49e3_9382_1a1975f360b4.slice.
Aug 5 22:28:06.727980 kubelet[2560]: I0805 22:28:06.727893 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfftn\" (UniqueName: \"kubernetes.io/projected/31fcd41d-c61d-49e3-9382-1a1975f360b4-kube-api-access-sfftn\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.727980 kubelet[2560]: I0805 22:28:06.728001 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-policysync\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728253 kubelet[2560]: I0805 22:28:06.728105 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fcd41d-c61d-49e3-9382-1a1975f360b4-tigera-ca-bundle\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728253 kubelet[2560]: I0805 22:28:06.728174 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-run-calico\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728337 kubelet[2560]: I0805 22:28:06.728256 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-lib-calico\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728337 kubelet[2560]: I0805 22:28:06.728292 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-lib-modules\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728484 kubelet[2560]: I0805 22:28:06.728414 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-xtables-lock\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728536 kubelet[2560]: I0805 22:28:06.728497 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-bin-dir\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728578 kubelet[2560]: I0805 22:28:06.728540 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-flexvol-driver-host\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728625 kubelet[2560]: I0805 22:28:06.728604 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31fcd41d-c61d-49e3-9382-1a1975f360b4-node-certs\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728665 kubelet[2560]: I0805 22:28:06.728640 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-net-dir\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.728744 kubelet[2560]: I0805 22:28:06.728706 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-log-dir\") pod \"calico-node-6f884\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " pod="calico-system/calico-node-6f884"
Aug 5 22:28:06.789756 kubelet[2560]: I0805 22:28:06.789667 2560 topology_manager.go:215] "Topology Admit Handler" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" podNamespace="calico-system" podName="csi-node-driver-tp77v"
Aug 5 22:28:06.790212 kubelet[2560]: E0805 22:28:06.790081 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140"
Aug 5 22:28:06.836370 kubelet[2560]: E0805 22:28:06.836331 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.836613 kubelet[2560]: W0805 22:28:06.836551 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.836613 kubelet[2560]: E0805 22:28:06.836593 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.840955 kubelet[2560]: E0805 22:28:06.840911 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.840955 kubelet[2560]: W0805 22:28:06.840950 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.841089 kubelet[2560]: E0805 22:28:06.840989 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.842215 kubelet[2560]: E0805 22:28:06.841364 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.842215 kubelet[2560]: W0805 22:28:06.841387 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.842215 kubelet[2560]: E0805 22:28:06.841403 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.842215 kubelet[2560]: E0805 22:28:06.842020 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.842215 kubelet[2560]: W0805 22:28:06.842035 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.842215 kubelet[2560]: E0805 22:28:06.842054 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.842607 kubelet[2560]: E0805 22:28:06.842410 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.842607 kubelet[2560]: W0805 22:28:06.842477 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.842607 kubelet[2560]: E0805 22:28:06.842498 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.842943 kubelet[2560]: E0805 22:28:06.842921 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.842995 kubelet[2560]: W0805 22:28:06.842968 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.842995 kubelet[2560]: E0805 22:28:06.842988 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.843371 kubelet[2560]: E0805 22:28:06.843312 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.843371 kubelet[2560]: W0805 22:28:06.843329 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.843371 kubelet[2560]: E0805 22:28:06.843346 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.843618 kubelet[2560]: E0805 22:28:06.843574 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.843618 kubelet[2560]: W0805 22:28:06.843590 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.843618 kubelet[2560]: E0805 22:28:06.843606 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.843890 kubelet[2560]: E0805 22:28:06.843866 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.843890 kubelet[2560]: W0805 22:28:06.843878 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.843890 kubelet[2560]: E0805 22:28:06.843893 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.845904 kubelet[2560]: E0805 22:28:06.845880 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.846061 kubelet[2560]: W0805 22:28:06.846001 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.846061 kubelet[2560]: E0805 22:28:06.846042 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.846741 kubelet[2560]: E0805 22:28:06.846495 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.846741 kubelet[2560]: W0805 22:28:06.846523 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.846741 kubelet[2560]: E0805 22:28:06.846557 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.847207 kubelet[2560]: E0805 22:28:06.847194 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.847368 kubelet[2560]: W0805 22:28:06.847330 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.847531 kubelet[2560]: E0805 22:28:06.847482 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.847994 kubelet[2560]: E0805 22:28:06.847954 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.847994 kubelet[2560]: W0805 22:28:06.847990 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.848125 kubelet[2560]: E0805 22:28:06.848022 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.848465 kubelet[2560]: E0805 22:28:06.848442 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.848465 kubelet[2560]: W0805 22:28:06.848460 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.848465 kubelet[2560]: E0805 22:28:06.848478 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.849081 kubelet[2560]: E0805 22:28:06.848925 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.849081 kubelet[2560]: W0805 22:28:06.848953 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.849081 kubelet[2560]: E0805 22:28:06.848974 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.849664 kubelet[2560]: E0805 22:28:06.849483 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.849664 kubelet[2560]: W0805 22:28:06.849594 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.849664 kubelet[2560]: E0805 22:28:06.849615 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.850335 kubelet[2560]: E0805 22:28:06.850117 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.850335 kubelet[2560]: W0805 22:28:06.850132 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.850335 kubelet[2560]: E0805 22:28:06.850147 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.851049 kubelet[2560]: E0805 22:28:06.850831 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.851049 kubelet[2560]: W0805 22:28:06.850845 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.851049 kubelet[2560]: E0805 22:28:06.850861 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:28:06.851462 kubelet[2560]: E0805 22:28:06.851337 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:28:06.851462 kubelet[2560]: W0805 22:28:06.851352 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:28:06.851462 kubelet[2560]: E0805 22:28:06.851368 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:28:06.851895 kubelet[2560]: E0805 22:28:06.851759 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.851895 kubelet[2560]: W0805 22:28:06.851773 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.851895 kubelet[2560]: E0805 22:28:06.851802 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.852384 kubelet[2560]: E0805 22:28:06.852259 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.852384 kubelet[2560]: W0805 22:28:06.852273 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.852384 kubelet[2560]: E0805 22:28:06.852289 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.865306 kubelet[2560]: E0805 22:28:06.865251 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.865306 kubelet[2560]: W0805 22:28:06.865286 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.865306 kubelet[2560]: E0805 22:28:06.865314 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.888614 kubelet[2560]: E0805 22:28:06.888518 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:06.901953 containerd[1455]: time="2024-08-05T22:28:06.901808146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778566b5f9-q7jkg,Uid:ab68857c-b471-4bd0-b69a-d5a040d46f49,Namespace:calico-system,Attempt:0,}" Aug 5 22:28:06.931377 kubelet[2560]: E0805 22:28:06.931287 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.931377 kubelet[2560]: W0805 22:28:06.931330 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.931377 kubelet[2560]: E0805 22:28:06.931359 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.931651 kubelet[2560]: I0805 22:28:06.931416 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31e0c4e3-71d6-44b3-8e8d-50979a20c140-varrun\") pod \"csi-node-driver-tp77v\" (UID: \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\") " pod="calico-system/csi-node-driver-tp77v" Aug 5 22:28:06.931980 kubelet[2560]: E0805 22:28:06.931932 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.931980 kubelet[2560]: W0805 22:28:06.931968 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.932205 kubelet[2560]: E0805 22:28:06.932009 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.932205 kubelet[2560]: I0805 22:28:06.932061 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31e0c4e3-71d6-44b3-8e8d-50979a20c140-kubelet-dir\") pod \"csi-node-driver-tp77v\" (UID: \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\") " pod="calico-system/csi-node-driver-tp77v" Aug 5 22:28:06.932572 kubelet[2560]: E0805 22:28:06.932547 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.932572 kubelet[2560]: W0805 22:28:06.932568 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.932656 kubelet[2560]: E0805 22:28:06.932597 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.932978 kubelet[2560]: E0805 22:28:06.932960 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.932978 kubelet[2560]: W0805 22:28:06.932976 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.933061 kubelet[2560]: E0805 22:28:06.933003 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.933313 kubelet[2560]: E0805 22:28:06.933293 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.933313 kubelet[2560]: W0805 22:28:06.933308 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.933421 kubelet[2560]: E0805 22:28:06.933334 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.933421 kubelet[2560]: I0805 22:28:06.933360 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31e0c4e3-71d6-44b3-8e8d-50979a20c140-socket-dir\") pod \"csi-node-driver-tp77v\" (UID: \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\") " pod="calico-system/csi-node-driver-tp77v" Aug 5 22:28:06.933911 kubelet[2560]: E0805 22:28:06.933889 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.933911 kubelet[2560]: W0805 22:28:06.933909 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.934078 kubelet[2560]: E0805 22:28:06.933983 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.934078 kubelet[2560]: I0805 22:28:06.934024 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84qx\" (UniqueName: \"kubernetes.io/projected/31e0c4e3-71d6-44b3-8e8d-50979a20c140-kube-api-access-r84qx\") pod \"csi-node-driver-tp77v\" (UID: \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\") " pod="calico-system/csi-node-driver-tp77v" Aug 5 22:28:06.935476 kubelet[2560]: E0805 22:28:06.935445 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.935476 kubelet[2560]: W0805 22:28:06.935464 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.935476 kubelet[2560]: E0805 22:28:06.935534 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.936068 kubelet[2560]: E0805 22:28:06.935818 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.936068 kubelet[2560]: W0805 22:28:06.935829 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.936068 kubelet[2560]: E0805 22:28:06.935892 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.936494 kubelet[2560]: E0805 22:28:06.936264 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.936494 kubelet[2560]: W0805 22:28:06.936276 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.936494 kubelet[2560]: E0805 22:28:06.936460 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.936494 kubelet[2560]: I0805 22:28:06.936495 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31e0c4e3-71d6-44b3-8e8d-50979a20c140-registration-dir\") pod \"csi-node-driver-tp77v\" (UID: \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\") " pod="calico-system/csi-node-driver-tp77v" Aug 5 22:28:06.936872 kubelet[2560]: E0805 22:28:06.936816 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.936872 kubelet[2560]: W0805 22:28:06.936829 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.936983 kubelet[2560]: E0805 22:28:06.936965 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.937317 kubelet[2560]: E0805 22:28:06.937238 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.937317 kubelet[2560]: W0805 22:28:06.937261 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.937317 kubelet[2560]: E0805 22:28:06.937278 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.937825 kubelet[2560]: E0805 22:28:06.937792 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.937825 kubelet[2560]: W0805 22:28:06.937810 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.937917 kubelet[2560]: E0805 22:28:06.937833 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.938123 kubelet[2560]: E0805 22:28:06.938093 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.938123 kubelet[2560]: W0805 22:28:06.938107 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.938200 kubelet[2560]: E0805 22:28:06.938127 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:06.938464 kubelet[2560]: E0805 22:28:06.938438 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.938464 kubelet[2560]: W0805 22:28:06.938451 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.938464 kubelet[2560]: E0805 22:28:06.938465 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:06.938840 kubelet[2560]: E0805 22:28:06.938801 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:06.938891 kubelet[2560]: W0805 22:28:06.938838 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:06.938936 kubelet[2560]: E0805 22:28:06.938893 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.038881 kubelet[2560]: E0805 22:28:07.038836 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.038881 kubelet[2560]: W0805 22:28:07.038872 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.039100 kubelet[2560]: E0805 22:28:07.038901 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.039673 kubelet[2560]: E0805 22:28:07.039650 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.039673 kubelet[2560]: W0805 22:28:07.039670 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.039746 kubelet[2560]: E0805 22:28:07.039708 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.040149 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.139903 kubelet[2560]: W0805 22:28:07.040159 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.040176 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.040525 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.139903 kubelet[2560]: W0805 22:28:07.040536 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.040586 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.041017 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.139903 kubelet[2560]: W0805 22:28:07.041028 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.041093 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.139903 kubelet[2560]: E0805 22:28:07.041737 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140273 kubelet[2560]: W0805 22:28:07.041749 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.041807 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.045071 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140273 kubelet[2560]: W0805 22:28:07.045097 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.045175 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.045473 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140273 kubelet[2560]: W0805 22:28:07.045483 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.045546 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.140273 kubelet[2560]: E0805 22:28:07.045802 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140273 kubelet[2560]: W0805 22:28:07.045812 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.045897 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046010 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140521 kubelet[2560]: W0805 22:28:07.046019 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046117 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046292 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140521 kubelet[2560]: W0805 22:28:07.046303 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046413 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046548 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140521 kubelet[2560]: W0805 22:28:07.046557 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046713 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.140521 kubelet[2560]: E0805 22:28:07.046830 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140798 kubelet[2560]: W0805 22:28:07.046840 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.046865 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.047150 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140798 kubelet[2560]: W0805 22:28:07.047165 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.047237 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.047479 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140798 kubelet[2560]: W0805 22:28:07.047489 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.047530 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.140798 kubelet[2560]: E0805 22:28:07.047760 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.140798 kubelet[2560]: W0805 22:28:07.047771 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.047814 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048008 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141112 kubelet[2560]: W0805 22:28:07.048018 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048061 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048272 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141112 kubelet[2560]: W0805 22:28:07.048282 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048329 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048508 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141112 kubelet[2560]: W0805 22:28:07.048522 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048557 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.141112 kubelet[2560]: E0805 22:28:07.048821 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141464 kubelet[2560]: W0805 22:28:07.048832 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.048867 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.049208 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141464 kubelet[2560]: W0805 22:28:07.049229 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.049272 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.049559 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141464 kubelet[2560]: W0805 22:28:07.049570 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.049625 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.141464 kubelet[2560]: E0805 22:28:07.049839 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141464 kubelet[2560]: W0805 22:28:07.049848 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141804 kubelet[2560]: E0805 22:28:07.049918 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.141804 kubelet[2560]: E0805 22:28:07.051096 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141804 kubelet[2560]: W0805 22:28:07.051110 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141804 kubelet[2560]: E0805 22:28:07.051125 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.141804 kubelet[2560]: E0805 22:28:07.140253 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.141804 kubelet[2560]: W0805 22:28:07.140277 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.141804 kubelet[2560]: E0805 22:28:07.140307 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.147172 kubelet[2560]: E0805 22:28:07.147139 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.147172 kubelet[2560]: W0805 22:28:07.147168 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.147342 kubelet[2560]: E0805 22:28:07.147204 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:07.151082 kubelet[2560]: E0805 22:28:07.151029 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:07.151082 kubelet[2560]: W0805 22:28:07.151057 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:07.151229 kubelet[2560]: E0805 22:28:07.151089 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:07.155426 kubelet[2560]: E0805 22:28:07.155382 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:07.156058 containerd[1455]: time="2024-08-05T22:28:07.156003472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6f884,Uid:31fcd41d-c61d-49e3-9382-1a1975f360b4,Namespace:calico-system,Attempt:0,}" Aug 5 22:28:07.218560 containerd[1455]: time="2024-08-05T22:28:07.216415359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:07.218560 containerd[1455]: time="2024-08-05T22:28:07.216516298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:07.218560 containerd[1455]: time="2024-08-05T22:28:07.216542327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:07.218560 containerd[1455]: time="2024-08-05T22:28:07.216560551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:07.239273 containerd[1455]: time="2024-08-05T22:28:07.238621513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:07.239273 containerd[1455]: time="2024-08-05T22:28:07.238720149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:07.239273 containerd[1455]: time="2024-08-05T22:28:07.238768991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:07.239273 containerd[1455]: time="2024-08-05T22:28:07.238791974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:07.251227 systemd[1]: Started cri-containerd-b27a5ea02b1f18d56a89f56ddecf6f38a1c05d0ebba28e3278d707df7ed25b29.scope - libcontainer container b27a5ea02b1f18d56a89f56ddecf6f38a1c05d0ebba28e3278d707df7ed25b29. Aug 5 22:28:07.275184 systemd[1]: Started cri-containerd-4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823.scope - libcontainer container 4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823. Aug 5 22:28:07.318643 containerd[1455]: time="2024-08-05T22:28:07.318580609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6f884,Uid:31fcd41d-c61d-49e3-9382-1a1975f360b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\"" Aug 5 22:28:07.321926 containerd[1455]: time="2024-08-05T22:28:07.321884547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778566b5f9-q7jkg,Uid:ab68857c-b471-4bd0-b69a-d5a040d46f49,Namespace:calico-system,Attempt:0,} returns sandbox id \"b27a5ea02b1f18d56a89f56ddecf6f38a1c05d0ebba28e3278d707df7ed25b29\"" Aug 5 22:28:07.332424 kubelet[2560]: E0805 22:28:07.332391 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:07.333461 kubelet[2560]: E0805 22:28:07.333401 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:07.334207 containerd[1455]: time="2024-08-05T22:28:07.334166488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:28:08.700230 
kubelet[2560]: E0805 22:28:08.700152 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:10.564433 containerd[1455]: time="2024-08-05T22:28:10.564345886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:10.565520 containerd[1455]: time="2024-08-05T22:28:10.565391439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:28:10.567938 containerd[1455]: time="2024-08-05T22:28:10.567889591Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:10.570240 containerd[1455]: time="2024-08-05T22:28:10.570177818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:10.571480 containerd[1455]: time="2024-08-05T22:28:10.571022564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.236809128s" Aug 5 22:28:10.571480 containerd[1455]: time="2024-08-05T22:28:10.571077217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 
22:28:10.573635 containerd[1455]: time="2024-08-05T22:28:10.573309419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:28:10.585738 containerd[1455]: time="2024-08-05T22:28:10.585442385Z" level=info msg="CreateContainer within sandbox \"b27a5ea02b1f18d56a89f56ddecf6f38a1c05d0ebba28e3278d707df7ed25b29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:28:10.615046 containerd[1455]: time="2024-08-05T22:28:10.614965859Z" level=info msg="CreateContainer within sandbox \"b27a5ea02b1f18d56a89f56ddecf6f38a1c05d0ebba28e3278d707df7ed25b29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4ee33b1b80ed6d7777281b8252c03ac617475862722da19badcecd799504d967\"" Aug 5 22:28:10.615800 containerd[1455]: time="2024-08-05T22:28:10.615707192Z" level=info msg="StartContainer for \"4ee33b1b80ed6d7777281b8252c03ac617475862722da19badcecd799504d967\"" Aug 5 22:28:10.653943 systemd[1]: Started cri-containerd-4ee33b1b80ed6d7777281b8252c03ac617475862722da19badcecd799504d967.scope - libcontainer container 4ee33b1b80ed6d7777281b8252c03ac617475862722da19badcecd799504d967. 
Aug 5 22:28:10.700285 kubelet[2560]: E0805 22:28:10.700233 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:10.769066 containerd[1455]: time="2024-08-05T22:28:10.768874236Z" level=info msg="StartContainer for \"4ee33b1b80ed6d7777281b8252c03ac617475862722da19badcecd799504d967\" returns successfully" Aug 5 22:28:10.788195 kubelet[2560]: E0805 22:28:10.788099 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:10.808381 kubelet[2560]: I0805 22:28:10.808320 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-778566b5f9-q7jkg" podStartSLOduration=1.5699392049999998 podCreationTimestamp="2024-08-05 22:28:06 +0000 UTC" firstStartedPulling="2024-08-05 22:28:07.333769773 +0000 UTC m=+21.750448028" lastFinishedPulling="2024-08-05 22:28:10.571974653 +0000 UTC m=+24.988652907" observedRunningTime="2024-08-05 22:28:10.807394817 +0000 UTC m=+25.224073081" watchObservedRunningTime="2024-08-05 22:28:10.808144084 +0000 UTC m=+25.224822338" Aug 5 22:28:10.883265 kubelet[2560]: E0805 22:28:10.883100 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.883265 kubelet[2560]: W0805 22:28:10.883136 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.883265 kubelet[2560]: E0805 22:28:10.883168 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.883501 kubelet[2560]: E0805 22:28:10.883449 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.883501 kubelet[2560]: W0805 22:28:10.883460 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.883501 kubelet[2560]: E0805 22:28:10.883474 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.883727 kubelet[2560]: E0805 22:28:10.883713 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.883727 kubelet[2560]: W0805 22:28:10.883727 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.883809 kubelet[2560]: E0805 22:28:10.883738 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.884037 kubelet[2560]: E0805 22:28:10.883974 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.884037 kubelet[2560]: W0805 22:28:10.884004 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.884037 kubelet[2560]: E0805 22:28:10.884020 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.884362 kubelet[2560]: E0805 22:28:10.884346 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.884362 kubelet[2560]: W0805 22:28:10.884358 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.884434 kubelet[2560]: E0805 22:28:10.884370 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.884629 kubelet[2560]: E0805 22:28:10.884607 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.884629 kubelet[2560]: W0805 22:28:10.884619 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.884737 kubelet[2560]: E0805 22:28:10.884635 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.884938 kubelet[2560]: E0805 22:28:10.884923 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.884938 kubelet[2560]: W0805 22:28:10.884935 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.884938 kubelet[2560]: E0805 22:28:10.884948 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.885266 kubelet[2560]: E0805 22:28:10.885242 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.885266 kubelet[2560]: W0805 22:28:10.885255 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.885338 kubelet[2560]: E0805 22:28:10.885270 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.885650 kubelet[2560]: E0805 22:28:10.885635 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.885650 kubelet[2560]: W0805 22:28:10.885647 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.885746 kubelet[2560]: E0805 22:28:10.885658 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.886151 kubelet[2560]: E0805 22:28:10.886098 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.886151 kubelet[2560]: W0805 22:28:10.886142 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.886263 kubelet[2560]: E0805 22:28:10.886180 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.886985 kubelet[2560]: E0805 22:28:10.886952 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.886985 kubelet[2560]: W0805 22:28:10.886973 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.886985 kubelet[2560]: E0805 22:28:10.886985 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.887524 kubelet[2560]: E0805 22:28:10.887285 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.887524 kubelet[2560]: W0805 22:28:10.887301 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.887524 kubelet[2560]: E0805 22:28:10.887313 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.888010 kubelet[2560]: E0805 22:28:10.887878 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.888010 kubelet[2560]: W0805 22:28:10.887893 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.888010 kubelet[2560]: E0805 22:28:10.887906 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.889443 kubelet[2560]: E0805 22:28:10.888215 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.889443 kubelet[2560]: W0805 22:28:10.888238 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.889443 kubelet[2560]: E0805 22:28:10.888261 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.889443 kubelet[2560]: E0805 22:28:10.888558 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.889443 kubelet[2560]: W0805 22:28:10.888571 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.889443 kubelet[2560]: E0805 22:28:10.888585 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.977730 kubelet[2560]: E0805 22:28:10.977649 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.977730 kubelet[2560]: W0805 22:28:10.977701 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.977730 kubelet[2560]: E0805 22:28:10.977739 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.978391 kubelet[2560]: E0805 22:28:10.978347 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.978391 kubelet[2560]: W0805 22:28:10.978378 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.978391 kubelet[2560]: E0805 22:28:10.978417 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.980041 kubelet[2560]: E0805 22:28:10.979765 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.980041 kubelet[2560]: W0805 22:28:10.979782 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.980041 kubelet[2560]: E0805 22:28:10.979808 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.980660 kubelet[2560]: E0805 22:28:10.980423 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.980660 kubelet[2560]: W0805 22:28:10.980435 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.980660 kubelet[2560]: E0805 22:28:10.980535 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.980962 kubelet[2560]: E0805 22:28:10.980926 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.981027 kubelet[2560]: W0805 22:28:10.980966 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.981154 kubelet[2560]: E0805 22:28:10.981092 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.981436 kubelet[2560]: E0805 22:28:10.981397 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.981436 kubelet[2560]: W0805 22:28:10.981415 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.981596 kubelet[2560]: E0805 22:28:10.981548 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.981649 kubelet[2560]: E0805 22:28:10.981637 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.981754 kubelet[2560]: W0805 22:28:10.981650 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.981754 kubelet[2560]: E0805 22:28:10.981672 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.981955 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.982593 kubelet[2560]: W0805 22:28:10.981966 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.981989 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.982229 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.982593 kubelet[2560]: W0805 22:28:10.982239 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.982260 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.982496 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.982593 kubelet[2560]: W0805 22:28:10.982528 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.982593 kubelet[2560]: E0805 22:28:10.982552 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.983200 kubelet[2560]: E0805 22:28:10.983180 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.983200 kubelet[2560]: W0805 22:28:10.983196 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.983288 kubelet[2560]: E0805 22:28:10.983227 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.983505 kubelet[2560]: E0805 22:28:10.983476 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.983505 kubelet[2560]: W0805 22:28:10.983505 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.983581 kubelet[2560]: E0805 22:28:10.983557 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.983781 kubelet[2560]: E0805 22:28:10.983766 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.983781 kubelet[2560]: W0805 22:28:10.983779 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.983874 kubelet[2560]: E0805 22:28:10.983826 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.984044 kubelet[2560]: E0805 22:28:10.984029 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.984093 kubelet[2560]: W0805 22:28:10.984058 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.984093 kubelet[2560]: E0805 22:28:10.984080 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.984399 kubelet[2560]: E0805 22:28:10.984380 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.984399 kubelet[2560]: W0805 22:28:10.984393 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.984481 kubelet[2560]: E0805 22:28:10.984412 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.984795 kubelet[2560]: E0805 22:28:10.984773 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.984795 kubelet[2560]: W0805 22:28:10.984790 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.984875 kubelet[2560]: E0805 22:28:10.984813 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:10.985201 kubelet[2560]: E0805 22:28:10.985178 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.985201 kubelet[2560]: W0805 22:28:10.985195 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.986298 kubelet[2560]: E0805 22:28:10.985218 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:10.986298 kubelet[2560]: E0805 22:28:10.985502 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:10.986298 kubelet[2560]: W0805 22:28:10.985546 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:10.986298 kubelet[2560]: E0805 22:28:10.985562 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.794325 kubelet[2560]: I0805 22:28:11.793594 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:28:11.794939 kubelet[2560]: E0805 22:28:11.794419 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:11.796380 kubelet[2560]: E0805 22:28:11.796347 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.796380 kubelet[2560]: W0805 22:28:11.796371 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.796476 kubelet[2560]: E0805 22:28:11.796399 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.797717 kubelet[2560]: E0805 22:28:11.797699 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.797717 kubelet[2560]: W0805 22:28:11.797714 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.797810 kubelet[2560]: E0805 22:28:11.797734 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.799599 kubelet[2560]: E0805 22:28:11.798455 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.799599 kubelet[2560]: W0805 22:28:11.798470 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.799599 kubelet[2560]: E0805 22:28:11.798483 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.799599 kubelet[2560]: E0805 22:28:11.798734 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.799599 kubelet[2560]: W0805 22:28:11.798744 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.799599 kubelet[2560]: E0805 22:28:11.798756 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.800253 kubelet[2560]: E0805 22:28:11.800225 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.800253 kubelet[2560]: W0805 22:28:11.800242 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.800253 kubelet[2560]: E0805 22:28:11.800259 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.800523 kubelet[2560]: E0805 22:28:11.800500 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.800523 kubelet[2560]: W0805 22:28:11.800515 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.800523 kubelet[2560]: E0805 22:28:11.800527 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.801044 kubelet[2560]: E0805 22:28:11.800858 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.801044 kubelet[2560]: W0805 22:28:11.800870 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.801044 kubelet[2560]: E0805 22:28:11.800883 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.801459 kubelet[2560]: E0805 22:28:11.801235 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.801459 kubelet[2560]: W0805 22:28:11.801249 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.801459 kubelet[2560]: E0805 22:28:11.801263 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.801972 kubelet[2560]: E0805 22:28:11.801942 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.801972 kubelet[2560]: W0805 22:28:11.801960 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.801972 kubelet[2560]: E0805 22:28:11.801973 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.802336 kubelet[2560]: E0805 22:28:11.802223 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.802336 kubelet[2560]: W0805 22:28:11.802232 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.802336 kubelet[2560]: E0805 22:28:11.802245 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.802443 kubelet[2560]: E0805 22:28:11.802437 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.802490 kubelet[2560]: W0805 22:28:11.802446 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.802490 kubelet[2560]: E0805 22:28:11.802457 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.802711 kubelet[2560]: E0805 22:28:11.802644 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.802711 kubelet[2560]: W0805 22:28:11.802661 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.802711 kubelet[2560]: E0805 22:28:11.802673 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.803313 kubelet[2560]: E0805 22:28:11.803128 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.803313 kubelet[2560]: W0805 22:28:11.803161 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.803313 kubelet[2560]: E0805 22:28:11.803202 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.803917 kubelet[2560]: E0805 22:28:11.803670 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.803917 kubelet[2560]: W0805 22:28:11.803709 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.803917 kubelet[2560]: E0805 22:28:11.803725 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.804702 kubelet[2560]: E0805 22:28:11.804634 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.804761 kubelet[2560]: W0805 22:28:11.804720 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.804798 kubelet[2560]: E0805 22:28:11.804760 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.889131 kubelet[2560]: E0805 22:28:11.889039 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.889131 kubelet[2560]: W0805 22:28:11.889075 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.889131 kubelet[2560]: E0805 22:28:11.889142 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.889997 kubelet[2560]: E0805 22:28:11.889531 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.889997 kubelet[2560]: W0805 22:28:11.889541 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.889997 kubelet[2560]: E0805 22:28:11.889569 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.889997 kubelet[2560]: E0805 22:28:11.889862 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.889997 kubelet[2560]: W0805 22:28:11.889871 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.889997 kubelet[2560]: E0805 22:28:11.889898 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.890298 kubelet[2560]: E0805 22:28:11.890148 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.890298 kubelet[2560]: W0805 22:28:11.890160 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.890298 kubelet[2560]: E0805 22:28:11.890187 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.890425 kubelet[2560]: E0805 22:28:11.890399 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.890425 kubelet[2560]: W0805 22:28:11.890413 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.890425 kubelet[2560]: E0805 22:28:11.890427 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.891645 kubelet[2560]: E0805 22:28:11.891129 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.891645 kubelet[2560]: W0805 22:28:11.891145 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.891645 kubelet[2560]: E0805 22:28:11.891283 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.891645 kubelet[2560]: E0805 22:28:11.891494 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.891645 kubelet[2560]: W0805 22:28:11.891508 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.891645 kubelet[2560]: E0805 22:28:11.891623 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.891878 kubelet[2560]: E0805 22:28:11.891817 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.891878 kubelet[2560]: W0805 22:28:11.891828 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.891975 kubelet[2560]: E0805 22:28:11.891952 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.892233 kubelet[2560]: E0805 22:28:11.892212 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.892233 kubelet[2560]: W0805 22:28:11.892227 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.892321 kubelet[2560]: E0805 22:28:11.892249 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.892650 kubelet[2560]: E0805 22:28:11.892623 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.892650 kubelet[2560]: W0805 22:28:11.892639 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.892753 kubelet[2560]: E0805 22:28:11.892659 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.893129 kubelet[2560]: E0805 22:28:11.892936 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.893129 kubelet[2560]: W0805 22:28:11.893128 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.893370 kubelet[2560]: E0805 22:28:11.893167 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.893634 kubelet[2560]: E0805 22:28:11.893600 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.893634 kubelet[2560]: W0805 22:28:11.893622 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.894212 kubelet[2560]: E0805 22:28:11.893834 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.894212 kubelet[2560]: E0805 22:28:11.893864 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.894212 kubelet[2560]: W0805 22:28:11.893875 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.894212 kubelet[2560]: E0805 22:28:11.894030 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.894799 kubelet[2560]: E0805 22:28:11.894547 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.894799 kubelet[2560]: W0805 22:28:11.894564 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.894799 kubelet[2560]: E0805 22:28:11.894585 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.895492 kubelet[2560]: E0805 22:28:11.895247 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.895492 kubelet[2560]: W0805 22:28:11.895262 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.895492 kubelet[2560]: E0805 22:28:11.895286 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.895632 kubelet[2560]: E0805 22:28:11.895598 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.895632 kubelet[2560]: W0805 22:28:11.895609 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.895632 kubelet[2560]: E0805 22:28:11.895624 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:11.896107 kubelet[2560]: E0805 22:28:11.896056 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.896107 kubelet[2560]: W0805 22:28:11.896079 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.896425 kubelet[2560]: E0805 22:28:11.896170 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:28:11.896425 kubelet[2560]: E0805 22:28:11.896333 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:28:11.896425 kubelet[2560]: W0805 22:28:11.896344 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:28:11.896425 kubelet[2560]: E0805 22:28:11.896358 2560 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:28:12.196024 containerd[1455]: time="2024-08-05T22:28:12.195948896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:12.199359 containerd[1455]: time="2024-08-05T22:28:12.199246116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:28:12.201358 containerd[1455]: time="2024-08-05T22:28:12.201271419Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:12.204953 containerd[1455]: time="2024-08-05T22:28:12.204859556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:12.206387 containerd[1455]: time="2024-08-05T22:28:12.205492415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.632138122s" Aug 5 22:28:12.206387 containerd[1455]: time="2024-08-05T22:28:12.205560793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:28:12.209413 containerd[1455]: time="2024-08-05T22:28:12.209337044Z" level=info msg="CreateContainer within sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:28:12.244822 containerd[1455]: time="2024-08-05T22:28:12.244411803Z" level=info msg="CreateContainer within sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\"" Aug 5 22:28:12.245173 containerd[1455]: time="2024-08-05T22:28:12.245130612Z" level=info msg="StartContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\"" Aug 5 22:28:12.288058 systemd[1]: Started cri-containerd-47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89.scope - libcontainer container 47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89. Aug 5 22:28:12.360441 systemd[1]: cri-containerd-47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89.scope: Deactivated successfully. 
Aug 5 22:28:12.455000 containerd[1455]: time="2024-08-05T22:28:12.454671629Z" level=info msg="StartContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" returns successfully" Aug 5 22:28:12.488859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89-rootfs.mount: Deactivated successfully. Aug 5 22:28:12.702778 kubelet[2560]: E0805 22:28:12.701957 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:12.808998 containerd[1455]: time="2024-08-05T22:28:12.806601665Z" level=info msg="StopContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" with timeout 5 (s)" Aug 5 22:28:13.031376 containerd[1455]: time="2024-08-05T22:28:13.031212263Z" level=info msg="Stop container \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" with signal terminated" Aug 5 22:28:13.032342 containerd[1455]: time="2024-08-05T22:28:13.032187554Z" level=info msg="shim disconnected" id=47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89 namespace=k8s.io Aug 5 22:28:13.032342 containerd[1455]: time="2024-08-05T22:28:13.032273555Z" level=warning msg="cleaning up after shim disconnected" id=47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89 namespace=k8s.io Aug 5 22:28:13.032342 containerd[1455]: time="2024-08-05T22:28:13.032292591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:28:13.072462 containerd[1455]: time="2024-08-05T22:28:13.071490967Z" level=info msg="StopContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" returns successfully" Aug 5 22:28:13.072462 containerd[1455]: 
time="2024-08-05T22:28:13.072503307Z" level=info msg="StopPodSandbox for \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\"" Aug 5 22:28:13.072960 containerd[1455]: time="2024-08-05T22:28:13.072540026Z" level=info msg="Container to stop \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:28:13.088633 systemd[1]: cri-containerd-4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823.scope: Deactivated successfully. Aug 5 22:28:13.145016 containerd[1455]: time="2024-08-05T22:28:13.144927926Z" level=info msg="shim disconnected" id=4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823 namespace=k8s.io Aug 5 22:28:13.145016 containerd[1455]: time="2024-08-05T22:28:13.145006694Z" level=warning msg="cleaning up after shim disconnected" id=4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823 namespace=k8s.io Aug 5 22:28:13.145016 containerd[1455]: time="2024-08-05T22:28:13.145018776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:28:13.166671 containerd[1455]: time="2024-08-05T22:28:13.166578516Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:28:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:28:13.168839 containerd[1455]: time="2024-08-05T22:28:13.168648652Z" level=info msg="TearDown network for sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" successfully" Aug 5 22:28:13.168839 containerd[1455]: time="2024-08-05T22:28:13.168706932Z" level=info msg="StopPodSandbox for \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" returns successfully" Aug 5 22:28:13.232513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823-rootfs.mount: Deactivated 
successfully. Aug 5 22:28:13.232698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823-shm.mount: Deactivated successfully. Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.306981 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-run-calico\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.307045 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-xtables-lock\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.307101 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31fcd41d-c61d-49e3-9382-1a1975f360b4-node-certs\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.307130 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-flexvol-driver-host\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.307163 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-bin-dir\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.307446 kubelet[2560]: I0805 22:28:13.307195 
2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfftn\" (UniqueName: \"kubernetes.io/projected/31fcd41d-c61d-49e3-9382-1a1975f360b4-kube-api-access-sfftn\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307217 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307233 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-policysync\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307460 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-lib-calico\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307490 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-log-dir\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307517 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-net-dir\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308130 kubelet[2560]: I0805 22:28:13.307542 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-lib-modules\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308387 kubelet[2560]: I0805 22:28:13.307574 2560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fcd41d-c61d-49e3-9382-1a1975f360b4-tigera-ca-bundle\") pod \"31fcd41d-c61d-49e3-9382-1a1975f360b4\" (UID: \"31fcd41d-c61d-49e3-9382-1a1975f360b4\") " Aug 5 22:28:13.308387 kubelet[2560]: I0805 22:28:13.307638 2560 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.308387 kubelet[2560]: I0805 22:28:13.307700 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308387 kubelet[2560]: I0805 22:28:13.307739 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308387 kubelet[2560]: I0805 22:28:13.307280 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-policysync" (OuterVolumeSpecName: "policysync") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308596 kubelet[2560]: I0805 22:28:13.307301 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308596 kubelet[2560]: I0805 22:28:13.307326 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308596 kubelet[2560]: I0805 22:28:13.307793 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308596 kubelet[2560]: I0805 22:28:13.307821 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308596 kubelet[2560]: I0805 22:28:13.307846 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:28:13.308822 kubelet[2560]: I0805 22:28:13.308279 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fcd41d-c61d-49e3-9382-1a1975f360b4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:28:13.312737 kubelet[2560]: I0805 22:28:13.312648 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fcd41d-c61d-49e3-9382-1a1975f360b4-kube-api-access-sfftn" (OuterVolumeSpecName: "kube-api-access-sfftn") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "kube-api-access-sfftn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:28:13.312976 kubelet[2560]: I0805 22:28:13.312935 2560 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fcd41d-c61d-49e3-9382-1a1975f360b4-node-certs" (OuterVolumeSpecName: "node-certs") pod "31fcd41d-c61d-49e3-9382-1a1975f360b4" (UID: "31fcd41d-c61d-49e3-9382-1a1975f360b4"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:28:13.314651 systemd[1]: var-lib-kubelet-pods-31fcd41d\x2dc61d\x2d49e3\x2d9382\x2d1a1975f360b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfftn.mount: Deactivated successfully. Aug 5 22:28:13.314887 systemd[1]: var-lib-kubelet-pods-31fcd41d\x2dc61d\x2d49e3\x2d9382\x2d1a1975f360b4-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408240 2560 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408301 2560 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fcd41d-c61d-49e3-9382-1a1975f360b4-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408317 2560 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-run-calico\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408352 2560 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31fcd41d-c61d-49e3-9382-1a1975f360b4-node-certs\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408367 2560 
reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408381 2560 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408399 2560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfftn\" (UniqueName: \"kubernetes.io/projected/31fcd41d-c61d-49e3-9382-1a1975f360b4-kube-api-access-sfftn\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408404 kubelet[2560]: I0805 22:28:13.408413 2560 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-policysync\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408975 kubelet[2560]: I0805 22:28:13.408427 2560 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408975 kubelet[2560]: I0805 22:28:13.408444 2560 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.408975 kubelet[2560]: I0805 22:28:13.408459 2560 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31fcd41d-c61d-49e3-9382-1a1975f360b4-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:28:13.712551 systemd[1]: Removed slice kubepods-besteffort-pod31fcd41d_c61d_49e3_9382_1a1975f360b4.slice - libcontainer container 
kubepods-besteffort-pod31fcd41d_c61d_49e3_9382_1a1975f360b4.slice. Aug 5 22:28:13.821170 kubelet[2560]: I0805 22:28:13.821093 2560 scope.go:117] "RemoveContainer" containerID="47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89" Aug 5 22:28:13.823354 containerd[1455]: time="2024-08-05T22:28:13.823293119Z" level=info msg="RemoveContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\"" Aug 5 22:28:13.921896 containerd[1455]: time="2024-08-05T22:28:13.921820335Z" level=info msg="RemoveContainer for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" returns successfully" Aug 5 22:28:13.922615 kubelet[2560]: I0805 22:28:13.922522 2560 scope.go:117] "RemoveContainer" containerID="47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89" Aug 5 22:28:13.923090 containerd[1455]: time="2024-08-05T22:28:13.922958222Z" level=error msg="ContainerStatus for \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\": not found" Aug 5 22:28:13.923410 kubelet[2560]: E0805 22:28:13.923282 2560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\": not found" containerID="47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89" Aug 5 22:28:13.923410 kubelet[2560]: I0805 22:28:13.923351 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89"} err="failed to get container status \"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"47c4d92b8b66dca84818a242234b9fb3647b1e34cd50a61c870f66fe19331d89\": not found" Aug 5 22:28:13.989768 kubelet[2560]: I0805 22:28:13.989536 2560 topology_manager.go:215] "Topology Admit Handler" podUID="dea95c71-e6d6-49c4-89c3-cbce45665514" podNamespace="calico-system" podName="calico-node-4gtcd" Aug 5 22:28:13.989768 kubelet[2560]: E0805 22:28:13.989698 2560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31fcd41d-c61d-49e3-9382-1a1975f360b4" containerName="flexvol-driver" Aug 5 22:28:13.989768 kubelet[2560]: I0805 22:28:13.989736 2560 memory_manager.go:346] "RemoveStaleState removing state" podUID="31fcd41d-c61d-49e3-9382-1a1975f360b4" containerName="flexvol-driver" Aug 5 22:28:13.999038 systemd[1]: Created slice kubepods-besteffort-poddea95c71_e6d6_49c4_89c3_cbce45665514.slice - libcontainer container kubepods-besteffort-poddea95c71_e6d6_49c4_89c3_cbce45665514.slice. Aug 5 22:28:14.113505 kubelet[2560]: I0805 22:28:14.113458 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-var-run-calico\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113505 kubelet[2560]: I0805 22:28:14.113507 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-policysync\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113505 kubelet[2560]: I0805 22:28:14.113531 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-var-lib-calico\") pod \"calico-node-4gtcd\" (UID: 
\"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113818 kubelet[2560]: I0805 22:28:14.113555 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-flexvol-driver-host\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113818 kubelet[2560]: I0805 22:28:14.113581 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-xtables-lock\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113818 kubelet[2560]: I0805 22:28:14.113606 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-cni-log-dir\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113818 kubelet[2560]: I0805 22:28:14.113628 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9x8k\" (UniqueName: \"kubernetes.io/projected/dea95c71-e6d6-49c4-89c3-cbce45665514-kube-api-access-x9x8k\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.113818 kubelet[2560]: I0805 22:28:14.113653 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-lib-modules\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " 
pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.114000 kubelet[2560]: I0805 22:28:14.113675 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dea95c71-e6d6-49c4-89c3-cbce45665514-node-certs\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.114000 kubelet[2560]: I0805 22:28:14.113714 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dea95c71-e6d6-49c4-89c3-cbce45665514-tigera-ca-bundle\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.114000 kubelet[2560]: I0805 22:28:14.113736 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-cni-bin-dir\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.114000 kubelet[2560]: I0805 22:28:14.113762 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dea95c71-e6d6-49c4-89c3-cbce45665514-cni-net-dir\") pod \"calico-node-4gtcd\" (UID: \"dea95c71-e6d6-49c4-89c3-cbce45665514\") " pod="calico-system/calico-node-4gtcd" Aug 5 22:28:14.604998 kubelet[2560]: E0805 22:28:14.604555 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:14.606572 containerd[1455]: time="2024-08-05T22:28:14.605967522Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-4gtcd,Uid:dea95c71-e6d6-49c4-89c3-cbce45665514,Namespace:calico-system,Attempt:0,}" Aug 5 22:28:14.651737 containerd[1455]: time="2024-08-05T22:28:14.650658825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:14.651737 containerd[1455]: time="2024-08-05T22:28:14.651616623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:14.651737 containerd[1455]: time="2024-08-05T22:28:14.651643964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:14.651737 containerd[1455]: time="2024-08-05T22:28:14.651659323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:14.693274 systemd[1]: Started cri-containerd-db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff.scope - libcontainer container db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff. 
Aug 5 22:28:14.701231 kubelet[2560]: E0805 22:28:14.701178 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:14.737340 containerd[1455]: time="2024-08-05T22:28:14.736751363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4gtcd,Uid:dea95c71-e6d6-49c4-89c3-cbce45665514,Namespace:calico-system,Attempt:0,} returns sandbox id \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\"" Aug 5 22:28:14.738076 kubelet[2560]: E0805 22:28:14.738047 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:14.741822 containerd[1455]: time="2024-08-05T22:28:14.741741151Z" level=info msg="CreateContainer within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:28:14.784146 containerd[1455]: time="2024-08-05T22:28:14.784045482Z" level=info msg="CreateContainer within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf\"" Aug 5 22:28:14.785059 containerd[1455]: time="2024-08-05T22:28:14.784997539Z" level=info msg="StartContainer for \"598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf\"" Aug 5 22:28:14.830230 systemd[1]: Started cri-containerd-598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf.scope - libcontainer container 598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf. 
Aug 5 22:28:14.896117 containerd[1455]: time="2024-08-05T22:28:14.896061069Z" level=info msg="StartContainer for \"598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf\" returns successfully" Aug 5 22:28:14.915752 systemd[1]: cri-containerd-598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf.scope: Deactivated successfully. Aug 5 22:28:14.972932 containerd[1455]: time="2024-08-05T22:28:14.972814003Z" level=info msg="shim disconnected" id=598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf namespace=k8s.io Aug 5 22:28:14.972932 containerd[1455]: time="2024-08-05T22:28:14.972907629Z" level=warning msg="cleaning up after shim disconnected" id=598bfc0353559a01e6e430b6c36378d64cec9d886ce85fe17ae1ee9c361aacaf namespace=k8s.io Aug 5 22:28:14.972932 containerd[1455]: time="2024-08-05T22:28:14.972919582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:28:15.708248 kubelet[2560]: I0805 22:28:15.707734 2560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="31fcd41d-c61d-49e3-9382-1a1975f360b4" path="/var/lib/kubelet/pods/31fcd41d-c61d-49e3-9382-1a1975f360b4/volumes" Aug 5 22:28:15.823904 kubelet[2560]: E0805 22:28:15.822615 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:15.827323 containerd[1455]: time="2024-08-05T22:28:15.827027143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:28:16.700193 kubelet[2560]: E0805 22:28:16.700114 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:18.700236 kubelet[2560]: E0805 22:28:18.700163 2560 pod_workers.go:1300] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:19.229390 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984). Aug 5 22:28:19.503855 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:19.501867 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:19.529311 systemd-logind[1445]: New session 8 of user core. Aug 5 22:28:19.533560 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:28:19.842171 sshd[3435]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:19.851234 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:40984.service: Deactivated successfully. Aug 5 22:28:19.854369 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:28:19.858046 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:28:19.861964 systemd-logind[1445]: Removed session 8. 
Aug 5 22:28:20.700416 kubelet[2560]: E0805 22:28:20.700309 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:22.013232 kubelet[2560]: I0805 22:28:22.012830 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:28:22.014036 kubelet[2560]: E0805 22:28:22.013807 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:22.699983 kubelet[2560]: E0805 22:28:22.699855 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:22.838044 kubelet[2560]: E0805 22:28:22.837809 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:24.521102 containerd[1455]: time="2024-08-05T22:28:24.520664213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:24.524868 containerd[1455]: time="2024-08-05T22:28:24.524755800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:28:24.530038 containerd[1455]: time="2024-08-05T22:28:24.528231632Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:24.545056 containerd[1455]: time="2024-08-05T22:28:24.543368405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:24.545056 containerd[1455]: time="2024-08-05T22:28:24.544212068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 8.717115105s" Aug 5 22:28:24.545056 containerd[1455]: time="2024-08-05T22:28:24.544295505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:28:24.550093 containerd[1455]: time="2024-08-05T22:28:24.550030426Z" level=info msg="CreateContainer within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:28:24.595285 containerd[1455]: time="2024-08-05T22:28:24.595023211Z" level=info msg="CreateContainer within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439\"" Aug 5 22:28:24.596868 containerd[1455]: time="2024-08-05T22:28:24.596808922Z" level=info msg="StartContainer for \"1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439\"" Aug 5 22:28:24.666638 systemd[1]: Started cri-containerd-1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439.scope - libcontainer container 
1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439. Aug 5 22:28:24.707533 kubelet[2560]: E0805 22:28:24.707041 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:24.737472 containerd[1455]: time="2024-08-05T22:28:24.736991193Z" level=info msg="StartContainer for \"1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439\" returns successfully" Aug 5 22:28:24.845948 kubelet[2560]: E0805 22:28:24.845752 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:24.856066 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Aug 5 22:28:24.989908 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:24.992988 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:25.001131 systemd-logind[1445]: New session 9 of user core. Aug 5 22:28:25.006008 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:28:25.599350 sshd[3497]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:25.603847 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:48410.service: Deactivated successfully. Aug 5 22:28:25.606461 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:28:25.608521 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:28:25.609878 systemd-logind[1445]: Removed session 9. 
Aug 5 22:28:25.847675 kubelet[2560]: E0805 22:28:25.847595 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:26.700746 kubelet[2560]: E0805 22:28:26.700700 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140" Aug 5 22:28:28.324643 containerd[1455]: time="2024-08-05T22:28:28.324585122Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:28:28.328034 systemd[1]: cri-containerd-1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439.scope: Deactivated successfully. Aug 5 22:28:28.348948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439-rootfs.mount: Deactivated successfully. 
Aug 5 22:28:28.388821 kubelet[2560]: I0805 22:28:28.388766 2560 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Aug 5 22:28:28.419417 containerd[1455]: time="2024-08-05T22:28:28.419318689Z" level=info msg="shim disconnected" id=1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439 namespace=k8s.io
Aug 5 22:28:28.419417 containerd[1455]: time="2024-08-05T22:28:28.419395933Z" level=warning msg="cleaning up after shim disconnected" id=1b35782c7bff85a3172ee116a8cf2bfc003d651e9c243160b808bb4af981f439 namespace=k8s.io
Aug 5 22:28:28.419417 containerd[1455]: time="2024-08-05T22:28:28.419406453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:28:28.515567 kubelet[2560]: I0805 22:28:28.515311 2560 topology_manager.go:215] "Topology Admit Handler" podUID="7fe59236-5755-4505-80fa-7fe51da3d40d" podNamespace="kube-system" podName="coredns-5dd5756b68-tmsgz"
Aug 5 22:28:28.517521 kubelet[2560]: I0805 22:28:28.517454 2560 topology_manager.go:215] "Topology Admit Handler" podUID="822485eb-5c50-4b95-b82a-b13ac5143fc7" podNamespace="kube-system" podName="coredns-5dd5756b68-6cjgz"
Aug 5 22:28:28.518013 kubelet[2560]: I0805 22:28:28.517837 2560 topology_manager.go:215] "Topology Admit Handler" podUID="741e867c-2688-4bc5-8045-f058e5990eb4" podNamespace="calico-system" podName="calico-kube-controllers-77d4c44755-wrv2h"
Aug 5 22:28:28.523779 systemd[1]: Created slice kubepods-burstable-pod7fe59236_5755_4505_80fa_7fe51da3d40d.slice - libcontainer container kubepods-burstable-pod7fe59236_5755_4505_80fa_7fe51da3d40d.slice.
Aug 5 22:28:28.529654 systemd[1]: Created slice kubepods-burstable-pod822485eb_5c50_4b95_b82a_b13ac5143fc7.slice - libcontainer container kubepods-burstable-pod822485eb_5c50_4b95_b82a_b13ac5143fc7.slice.
Aug 5 22:28:28.535305 systemd[1]: Created slice kubepods-besteffort-pod741e867c_2688_4bc5_8045_f058e5990eb4.slice - libcontainer container kubepods-besteffort-pod741e867c_2688_4bc5_8045_f058e5990eb4.slice.
Aug 5 22:28:28.638617 kubelet[2560]: I0805 22:28:28.638372 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe59236-5755-4505-80fa-7fe51da3d40d-config-volume\") pod \"coredns-5dd5756b68-tmsgz\" (UID: \"7fe59236-5755-4505-80fa-7fe51da3d40d\") " pod="kube-system/coredns-5dd5756b68-tmsgz"
Aug 5 22:28:28.638617 kubelet[2560]: I0805 22:28:28.638433 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2gc\" (UniqueName: \"kubernetes.io/projected/822485eb-5c50-4b95-b82a-b13ac5143fc7-kube-api-access-2p2gc\") pod \"coredns-5dd5756b68-6cjgz\" (UID: \"822485eb-5c50-4b95-b82a-b13ac5143fc7\") " pod="kube-system/coredns-5dd5756b68-6cjgz"
Aug 5 22:28:28.638617 kubelet[2560]: I0805 22:28:28.638460 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/822485eb-5c50-4b95-b82a-b13ac5143fc7-config-volume\") pod \"coredns-5dd5756b68-6cjgz\" (UID: \"822485eb-5c50-4b95-b82a-b13ac5143fc7\") " pod="kube-system/coredns-5dd5756b68-6cjgz"
Aug 5 22:28:28.638617 kubelet[2560]: I0805 22:28:28.638484 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfb4\" (UniqueName: \"kubernetes.io/projected/7fe59236-5755-4505-80fa-7fe51da3d40d-kube-api-access-gkfb4\") pod \"coredns-5dd5756b68-tmsgz\" (UID: \"7fe59236-5755-4505-80fa-7fe51da3d40d\") " pod="kube-system/coredns-5dd5756b68-tmsgz"
Aug 5 22:28:28.638617 kubelet[2560]: I0805 22:28:28.638554 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/741e867c-2688-4bc5-8045-f058e5990eb4-tigera-ca-bundle\") pod \"calico-kube-controllers-77d4c44755-wrv2h\" (UID: \"741e867c-2688-4bc5-8045-f058e5990eb4\") " pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h"
Aug 5 22:28:28.639057 kubelet[2560]: I0805 22:28:28.638596 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fms\" (UniqueName: \"kubernetes.io/projected/741e867c-2688-4bc5-8045-f058e5990eb4-kube-api-access-l7fms\") pod \"calico-kube-controllers-77d4c44755-wrv2h\" (UID: \"741e867c-2688-4bc5-8045-f058e5990eb4\") " pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h"
Aug 5 22:28:28.706908 systemd[1]: Created slice kubepods-besteffort-pod31e0c4e3_71d6_44b3_8e8d_50979a20c140.slice - libcontainer container kubepods-besteffort-pod31e0c4e3_71d6_44b3_8e8d_50979a20c140.slice.
Aug 5 22:28:28.709508 containerd[1455]: time="2024-08-05T22:28:28.709451931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp77v,Uid:31e0c4e3-71d6-44b3-8e8d-50979a20c140,Namespace:calico-system,Attempt:0,}"
Aug 5 22:28:28.827822 kubelet[2560]: E0805 22:28:28.827769 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:28:28.828607 containerd[1455]: time="2024-08-05T22:28:28.828549449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tmsgz,Uid:7fe59236-5755-4505-80fa-7fe51da3d40d,Namespace:kube-system,Attempt:0,}"
Aug 5 22:28:28.832671 kubelet[2560]: E0805 22:28:28.832626 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:28:28.833285 containerd[1455]: time="2024-08-05T22:28:28.833236603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6cjgz,Uid:822485eb-5c50-4b95-b82a-b13ac5143fc7,Namespace:kube-system,Attempt:0,}"
Aug 5 22:28:28.838044 containerd[1455]: time="2024-08-05T22:28:28.838005160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d4c44755-wrv2h,Uid:741e867c-2688-4bc5-8045-f058e5990eb4,Namespace:calico-system,Attempt:0,}"
Aug 5 22:28:28.859361 kubelet[2560]: E0805 22:28:28.859313 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:28:28.860027 containerd[1455]: time="2024-08-05T22:28:28.859989245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Aug 5 22:28:29.091829 containerd[1455]: time="2024-08-05T22:28:29.091771165Z" level=error msg="Failed to destroy network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.092604 containerd[1455]: time="2024-08-05T22:28:29.092447153Z" level=error msg="encountered an error cleaning up failed sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.092604 containerd[1455]: time="2024-08-05T22:28:29.092511674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp77v,Uid:31e0c4e3-71d6-44b3-8e8d-50979a20c140,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.092924 kubelet[2560]: E0805 22:28:29.092868 2560 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.093013 kubelet[2560]: E0805 22:28:29.092962 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tp77v"
Aug 5 22:28:29.093013 kubelet[2560]: E0805 22:28:29.092991 2560 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tp77v"
Aug 5 22:28:29.093095 kubelet[2560]: E0805 22:28:29.093059 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tp77v_calico-system(31e0c4e3-71d6-44b3-8e8d-50979a20c140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tp77v_calico-system(31e0c4e3-71d6-44b3-8e8d-50979a20c140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140"
Aug 5 22:28:29.104320 containerd[1455]: time="2024-08-05T22:28:29.104210101Z" level=error msg="Failed to destroy network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.106222 containerd[1455]: time="2024-08-05T22:28:29.106164939Z" level=error msg="encountered an error cleaning up failed sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.106425 containerd[1455]: time="2024-08-05T22:28:29.106237786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tmsgz,Uid:7fe59236-5755-4505-80fa-7fe51da3d40d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.106559 kubelet[2560]: E0805 22:28:29.106525 2560 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.106611 kubelet[2560]: E0805 22:28:29.106599 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-tmsgz"
Aug 5 22:28:29.106674 kubelet[2560]: E0805 22:28:29.106628 2560 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-tmsgz"
Aug 5 22:28:29.106755 kubelet[2560]: E0805 22:28:29.106733 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-tmsgz_kube-system(7fe59236-5755-4505-80fa-7fe51da3d40d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-tmsgz_kube-system(7fe59236-5755-4505-80fa-7fe51da3d40d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-tmsgz" podUID="7fe59236-5755-4505-80fa-7fe51da3d40d"
Aug 5 22:28:29.117322 containerd[1455]: time="2024-08-05T22:28:29.117235048Z" level=error msg="Failed to destroy network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.117944 containerd[1455]: time="2024-08-05T22:28:29.117869198Z" level=error msg="encountered an error cleaning up failed sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.117998 containerd[1455]: time="2024-08-05T22:28:29.117946904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d4c44755-wrv2h,Uid:741e867c-2688-4bc5-8045-f058e5990eb4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.118346 kubelet[2560]: E0805 22:28:29.118278 2560 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.118346 kubelet[2560]: E0805 22:28:29.118353 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h"
Aug 5 22:28:29.118530 kubelet[2560]: E0805 22:28:29.118381 2560 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h"
Aug 5 22:28:29.118530 kubelet[2560]: E0805 22:28:29.118448 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77d4c44755-wrv2h_calico-system(741e867c-2688-4bc5-8045-f058e5990eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77d4c44755-wrv2h_calico-system(741e867c-2688-4bc5-8045-f058e5990eb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h" podUID="741e867c-2688-4bc5-8045-f058e5990eb4"
Aug 5 22:28:29.119546 containerd[1455]: time="2024-08-05T22:28:29.119496841Z" level=error msg="Failed to destroy network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.119983 containerd[1455]: time="2024-08-05T22:28:29.119952476Z" level=error msg="encountered an error cleaning up failed sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.120024 containerd[1455]: time="2024-08-05T22:28:29.120004584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6cjgz,Uid:822485eb-5c50-4b95-b82a-b13ac5143fc7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.120391 kubelet[2560]: E0805 22:28:29.120333 2560 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.120505 kubelet[2560]: E0805 22:28:29.120405 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6cjgz"
Aug 5 22:28:29.120505 kubelet[2560]: E0805 22:28:29.120430 2560 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6cjgz"
Aug 5 22:28:29.120505 kubelet[2560]: E0805 22:28:29.120489 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-6cjgz_kube-system(822485eb-5c50-4b95-b82a-b13ac5143fc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-6cjgz_kube-system(822485eb-5c50-4b95-b82a-b13ac5143fc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6cjgz" podUID="822485eb-5c50-4b95-b82a-b13ac5143fc7"
Aug 5 22:28:29.350312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2-shm.mount: Deactivated successfully.
Aug 5 22:28:29.863151 kubelet[2560]: I0805 22:28:29.862245 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2"
Aug 5 22:28:29.863622 containerd[1455]: time="2024-08-05T22:28:29.862928443Z" level=info msg="StopPodSandbox for \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\""
Aug 5 22:28:29.863622 containerd[1455]: time="2024-08-05T22:28:29.863225680Z" level=info msg="Ensure that sandbox cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2 in task-service has been cleanup successfully"
Aug 5 22:28:29.863911 kubelet[2560]: I0805 22:28:29.863220 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021"
Aug 5 22:28:29.864419 containerd[1455]: time="2024-08-05T22:28:29.864160645Z" level=info msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\""
Aug 5 22:28:29.864419 containerd[1455]: time="2024-08-05T22:28:29.864411986Z" level=info msg="Ensure that sandbox ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021 in task-service has been cleanup successfully"
Aug 5 22:28:29.865037 kubelet[2560]: I0805 22:28:29.864978 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974"
Aug 5 22:28:29.865450 containerd[1455]: time="2024-08-05T22:28:29.865413726Z" level=info msg="StopPodSandbox for \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\""
Aug 5 22:28:29.865695 containerd[1455]: time="2024-08-05T22:28:29.865599524Z" level=info msg="Ensure that sandbox dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974 in task-service has been cleanup successfully"
Aug 5 22:28:29.866564 kubelet[2560]: I0805 22:28:29.866533 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6"
Aug 5 22:28:29.867948 containerd[1455]: time="2024-08-05T22:28:29.867261131Z" level=info msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\""
Aug 5 22:28:29.867948 containerd[1455]: time="2024-08-05T22:28:29.867497334Z" level=info msg="Ensure that sandbox 2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6 in task-service has been cleanup successfully"
Aug 5 22:28:29.906044 containerd[1455]: time="2024-08-05T22:28:29.905975104Z" level=error msg="StopPodSandbox for \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\" failed" error="failed to destroy network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.906267 containerd[1455]: time="2024-08-05T22:28:29.906206708Z" level=error msg="StopPodSandbox for \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\" failed" error="failed to destroy network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.906428 kubelet[2560]: E0805 22:28:29.906391 2560 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2"
Aug 5 22:28:29.906514 kubelet[2560]: E0805 22:28:29.906461 2560 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2"}
Aug 5 22:28:29.906580 kubelet[2560]: E0805 22:28:29.906511 2560 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:28:29.906580 kubelet[2560]: E0805 22:28:29.906551 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31e0c4e3-71d6-44b3-8e8d-50979a20c140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tp77v" podUID="31e0c4e3-71d6-44b3-8e8d-50979a20c140"
Aug 5 22:28:29.906780 kubelet[2560]: E0805 22:28:29.906637 2560 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974"
Aug 5 22:28:29.906780 kubelet[2560]: E0805 22:28:29.906656 2560 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974"}
Aug 5 22:28:29.906859 kubelet[2560]: E0805 22:28:29.906792 2560 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"822485eb-5c50-4b95-b82a-b13ac5143fc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:28:29.906859 kubelet[2560]: E0805 22:28:29.906833 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"822485eb-5c50-4b95-b82a-b13ac5143fc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6cjgz" podUID="822485eb-5c50-4b95-b82a-b13ac5143fc7"
Aug 5 22:28:29.910602 containerd[1455]: time="2024-08-05T22:28:29.910549285Z" level=error msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" failed" error="failed to destroy network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.910812 kubelet[2560]: E0805 22:28:29.910778 2560 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6"
Aug 5 22:28:29.910812 kubelet[2560]: E0805 22:28:29.910806 2560 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6"}
Aug 5 22:28:29.910912 kubelet[2560]: E0805 22:28:29.910844 2560 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fe59236-5755-4505-80fa-7fe51da3d40d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:28:29.910912 kubelet[2560]: E0805 22:28:29.910881 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fe59236-5755-4505-80fa-7fe51da3d40d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-tmsgz" podUID="7fe59236-5755-4505-80fa-7fe51da3d40d"
Aug 5 22:28:29.913371 containerd[1455]: time="2024-08-05T22:28:29.913307200Z" level=error msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" failed" error="failed to destroy network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:28:29.913638 kubelet[2560]: E0805 22:28:29.913606 2560 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021"
Aug 5 22:28:29.913737 kubelet[2560]: E0805 22:28:29.913648 2560 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021"}
Aug 5 22:28:29.913737 kubelet[2560]: E0805 22:28:29.913721 2560 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"741e867c-2688-4bc5-8045-f058e5990eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:28:29.913896 kubelet[2560]: E0805 22:28:29.913761 2560 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"741e867c-2688-4bc5-8045-f058e5990eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h" podUID="741e867c-2688-4bc5-8045-f058e5990eb4"
Aug 5 22:28:30.613641 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508).
Aug 5 22:28:30.689600 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:28:30.691528 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:28:30.696739 systemd-logind[1445]: New session 10 of user core.
Aug 5 22:28:30.703892 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 5 22:28:30.858973 sshd[3786]: pam_unix(sshd:session): session closed for user core
Aug 5 22:28:30.862080 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:59508.service: Deactivated successfully.
Aug 5 22:28:30.864475 systemd[1]: session-10.scope: Deactivated successfully.
Aug 5 22:28:30.866248 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Aug 5 22:28:30.867195 systemd-logind[1445]: Removed session 10.
Aug 5 22:28:33.706573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655466576.mount: Deactivated successfully.
Aug 5 22:28:35.252434 containerd[1455]: time="2024-08-05T22:28:35.252339696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:35.307308 containerd[1455]: time="2024-08-05T22:28:35.307227653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:28:35.354267 containerd[1455]: time="2024-08-05T22:28:35.354172024Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:35.370781 containerd[1455]: time="2024-08-05T22:28:35.370663775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:35.371494 containerd[1455]: time="2024-08-05T22:28:35.371433783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 6.51139342s" Aug 5 22:28:35.371494 containerd[1455]: time="2024-08-05T22:28:35.371487356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:28:35.381347 containerd[1455]: time="2024-08-05T22:28:35.381288273Z" level=info msg="CreateContainer within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:28:35.821263 containerd[1455]: time="2024-08-05T22:28:35.821167775Z" level=info msg="CreateContainer 
within sandbox \"db4b1f060abafdfd1dd99f83f8d06ef02c37a788b896dead5cff556fe62d4cff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a\"" Aug 5 22:28:35.823713 containerd[1455]: time="2024-08-05T22:28:35.822915845Z" level=info msg="StartContainer for \"255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a\"" Aug 5 22:28:35.886172 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:59514.service - OpenSSH per-connection server daemon (10.0.0.1:59514). Aug 5 22:28:35.911950 systemd[1]: Started cri-containerd-255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a.scope - libcontainer container 255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a. Aug 5 22:28:36.025235 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 59514 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:36.026362 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:36.034044 systemd-logind[1445]: New session 11 of user core. Aug 5 22:28:36.049124 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:28:36.062085 containerd[1455]: time="2024-08-05T22:28:36.062029670Z" level=info msg="StartContainer for \"255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a\" returns successfully" Aug 5 22:28:36.163599 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:28:36.163941 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 22:28:36.324747 sshd[3816]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:36.330829 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:59514.service: Deactivated successfully. Aug 5 22:28:36.334001 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:28:36.335433 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. 
Aug 5 22:28:36.337230 systemd-logind[1445]: Removed session 11. Aug 5 22:28:36.888113 kubelet[2560]: E0805 22:28:36.888063 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:37.120115 kubelet[2560]: I0805 22:28:37.118103 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4gtcd" podStartSLOduration=4.572563539 podCreationTimestamp="2024-08-05 22:28:13 +0000 UTC" firstStartedPulling="2024-08-05 22:28:15.826288406 +0000 UTC m=+30.242966660" lastFinishedPulling="2024-08-05 22:28:35.371790813 +0000 UTC m=+49.788469067" observedRunningTime="2024-08-05 22:28:37.117888073 +0000 UTC m=+51.534566337" watchObservedRunningTime="2024-08-05 22:28:37.118065946 +0000 UTC m=+51.534744211" Aug 5 22:28:39.456041 systemd-networkd[1399]: vxlan.calico: Link UP Aug 5 22:28:39.456055 systemd-networkd[1399]: vxlan.calico: Gained carrier Aug 5 22:28:40.592907 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Aug 5 22:28:41.344217 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:60714.service - OpenSSH per-connection server daemon (10.0.0.1:60714). Aug 5 22:28:41.397876 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 60714 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:41.399732 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:41.404853 systemd-logind[1445]: New session 12 of user core. Aug 5 22:28:41.416887 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:28:41.552648 sshd[4094]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:41.570880 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:60714.service: Deactivated successfully. Aug 5 22:28:41.573578 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:28:41.575781 systemd-logind[1445]: Session 12 logged out. 
Waiting for processes to exit. Aug 5 22:28:41.577743 kubelet[2560]: I0805 22:28:41.577514 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:28:41.578876 kubelet[2560]: E0805 22:28:41.578564 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:41.583276 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:60728.service - OpenSSH per-connection server daemon (10.0.0.1:60728). Aug 5 22:28:41.586646 systemd-logind[1445]: Removed session 12. Aug 5 22:28:41.627811 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 60728 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:41.628429 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:41.636400 systemd-logind[1445]: New session 13 of user core. Aug 5 22:28:41.639891 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:28:41.651261 systemd[1]: run-containerd-runc-k8s.io-255a9310b1e7d32546b4fbb2ba244b0bd833a683197d3e15218d6aa34c13355a-runc.JEU2Cl.mount: Deactivated successfully. Aug 5 22:28:41.896399 kubelet[2560]: E0805 22:28:41.896350 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:41.984916 sshd[4109]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:41.993087 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:60728.service: Deactivated successfully. Aug 5 22:28:41.995529 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:28:41.998915 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:28:42.010247 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730). 
Aug 5 22:28:42.015238 systemd-logind[1445]: Removed session 13. Aug 5 22:28:42.068272 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:42.070562 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:42.076655 systemd-logind[1445]: New session 14 of user core. Aug 5 22:28:42.083098 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:28:42.213076 sshd[4169]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:42.217985 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:60730.service: Deactivated successfully. Aug 5 22:28:42.220467 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:28:42.221303 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:28:42.222416 systemd-logind[1445]: Removed session 14. Aug 5 22:28:42.701073 containerd[1455]: time="2024-08-05T22:28:42.701001531Z" level=info msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\"" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.896 [INFO][4199] k8s.go 608: Cleaning up netns ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.897 [INFO][4199] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" iface="eth0" netns="/var/run/netns/cni-7a01f6bb-5df1-e409-f6a0-70ecd91e1705" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.897 [INFO][4199] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" iface="eth0" netns="/var/run/netns/cni-7a01f6bb-5df1-e409-f6a0-70ecd91e1705" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.898 [INFO][4199] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" iface="eth0" netns="/var/run/netns/cni-7a01f6bb-5df1-e409-f6a0-70ecd91e1705" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.898 [INFO][4199] k8s.go 615: Releasing IP address(es) ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:42.898 [INFO][4199] utils.go 188: Calico CNI releasing IP address ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.024 [INFO][4207] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.024 [INFO][4207] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.025 [INFO][4207] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.041 [WARNING][4207] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.041 [INFO][4207] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.043 [INFO][4207] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:43.050107 containerd[1455]: 2024-08-05 22:28:43.046 [INFO][4199] k8s.go 621: Teardown processing complete. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:43.050623 containerd[1455]: time="2024-08-05T22:28:43.050247007Z" level=info msg="TearDown network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" successfully" Aug 5 22:28:43.050623 containerd[1455]: time="2024-08-05T22:28:43.050284759Z" level=info msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" returns successfully" Aug 5 22:28:43.053458 systemd[1]: run-netns-cni\x2d7a01f6bb\x2d5df1\x2de409\x2df6a0\x2d70ecd91e1705.mount: Deactivated successfully. 
Aug 5 22:28:43.054119 containerd[1455]: time="2024-08-05T22:28:43.053558902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d4c44755-wrv2h,Uid:741e867c-2688-4bc5-8045-f058e5990eb4,Namespace:calico-system,Attempt:1,}" Aug 5 22:28:43.359811 systemd-networkd[1399]: cali8f2a01c1cbe: Link UP Aug 5 22:28:43.360145 systemd-networkd[1399]: cali8f2a01c1cbe: Gained carrier Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.271 [INFO][4216] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0 calico-kube-controllers-77d4c44755- calico-system 741e867c-2688-4bc5-8045-f058e5990eb4 859 0 2024-08-05 22:28:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77d4c44755 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77d4c44755-wrv2h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8f2a01c1cbe [] []}} ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.271 [INFO][4216] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.308 [INFO][4229] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" HandleID="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.318 [INFO][4229] ipam_plugin.go 264: Auto assigning IP ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" HandleID="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77d4c44755-wrv2h", "timestamp":"2024-08-05 22:28:43.308556494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.318 [INFO][4229] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.318 [INFO][4229] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.318 [INFO][4229] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.320 [INFO][4229] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.328 [INFO][4229] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.333 [INFO][4229] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.335 [INFO][4229] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.338 [INFO][4229] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.338 [INFO][4229] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.341 [INFO][4229] ipam.go 1685: Creating new handle: k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.345 [INFO][4229] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.350 [INFO][4229] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" host="localhost" Aug 5 
22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.350 [INFO][4229] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" host="localhost" Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.351 [INFO][4229] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:43.374173 containerd[1455]: 2024-08-05 22:28:43.351 [INFO][4229] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" HandleID="k8s-pod-network.3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.354 [INFO][4216] k8s.go 386: Populated endpoint ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0", GenerateName:"calico-kube-controllers-77d4c44755-", Namespace:"calico-system", SelfLink:"", UID:"741e867c-2688-4bc5-8045-f058e5990eb4", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d4c44755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77d4c44755-wrv2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f2a01c1cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.354 [INFO][4216] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.354 [INFO][4216] dataplane_linux.go 68: Setting the host side veth name to cali8f2a01c1cbe ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.360 [INFO][4216] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.360 [INFO][4216] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" 
Pod="calico-kube-controllers-77d4c44755-wrv2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0", GenerateName:"calico-kube-controllers-77d4c44755-", Namespace:"calico-system", SelfLink:"", UID:"741e867c-2688-4bc5-8045-f058e5990eb4", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d4c44755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c", Pod:"calico-kube-controllers-77d4c44755-wrv2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f2a01c1cbe", MAC:"fe:05:4e:53:89:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:43.374797 containerd[1455]: 2024-08-05 22:28:43.369 [INFO][4216] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c" Namespace="calico-system" Pod="calico-kube-controllers-77d4c44755-wrv2h" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:43.431483 containerd[1455]: time="2024-08-05T22:28:43.431338591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:43.431483 containerd[1455]: time="2024-08-05T22:28:43.431424255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:43.431483 containerd[1455]: time="2024-08-05T22:28:43.431443232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:43.431483 containerd[1455]: time="2024-08-05T22:28:43.431455395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:43.467064 systemd[1]: Started cri-containerd-3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c.scope - libcontainer container 3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c. 
Aug 5 22:28:43.490098 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:28:43.523199 containerd[1455]: time="2024-08-05T22:28:43.523146252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d4c44755-wrv2h,Uid:741e867c-2688-4bc5-8045-f058e5990eb4,Namespace:calico-system,Attempt:1,} returns sandbox id \"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c\"" Aug 5 22:28:43.525387 containerd[1455]: time="2024-08-05T22:28:43.525350118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:28:43.701282 containerd[1455]: time="2024-08-05T22:28:43.701211069Z" level=info msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\"" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.754 [INFO][4311] k8s.go 608: Cleaning up netns ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.754 [INFO][4311] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" iface="eth0" netns="/var/run/netns/cni-bf6b9f0c-7028-639d-28bd-eed5a8a4cab2" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.755 [INFO][4311] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" iface="eth0" netns="/var/run/netns/cni-bf6b9f0c-7028-639d-28bd-eed5a8a4cab2" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.755 [INFO][4311] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" iface="eth0" netns="/var/run/netns/cni-bf6b9f0c-7028-639d-28bd-eed5a8a4cab2" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.755 [INFO][4311] k8s.go 615: Releasing IP address(es) ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.755 [INFO][4311] utils.go 188: Calico CNI releasing IP address ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.773 [INFO][4319] ipam_plugin.go 411: Releasing address using handleID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.773 [INFO][4319] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.773 [INFO][4319] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.779 [WARNING][4319] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.780 [INFO][4319] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.781 [INFO][4319] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:43.787163 containerd[1455]: 2024-08-05 22:28:43.784 [INFO][4311] k8s.go 621: Teardown processing complete. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:43.787804 containerd[1455]: time="2024-08-05T22:28:43.787374835Z" level=info msg="TearDown network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" successfully" Aug 5 22:28:43.787804 containerd[1455]: time="2024-08-05T22:28:43.787413078Z" level=info msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" returns successfully" Aug 5 22:28:43.787882 kubelet[2560]: E0805 22:28:43.787842 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:43.788481 containerd[1455]: time="2024-08-05T22:28:43.788414733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tmsgz,Uid:7fe59236-5755-4505-80fa-7fe51da3d40d,Namespace:kube-system,Attempt:1,}" Aug 5 22:28:44.053602 systemd[1]: run-netns-cni\x2dbf6b9f0c\x2d7028\x2d639d\x2d28bd\x2deed5a8a4cab2.mount: Deactivated successfully. 
Aug 5 22:28:44.285633 systemd-networkd[1399]: calie68bbc65821: Link UP Aug 5 22:28:44.287812 systemd-networkd[1399]: calie68bbc65821: Gained carrier Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.168 [INFO][4327] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--tmsgz-eth0 coredns-5dd5756b68- kube-system 7fe59236-5755-4505-80fa-7fe51da3d40d 868 0 2024-08-05 22:27:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-tmsgz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie68bbc65821 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.172 [INFO][4327] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.233 [INFO][4341] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" HandleID="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.244 [INFO][4341] ipam_plugin.go 264: Auto assigning IP ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" 
HandleID="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001287b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-tmsgz", "timestamp":"2024-08-05 22:28:44.233416411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.245 [INFO][4341] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.245 [INFO][4341] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.245 [INFO][4341] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.247 [INFO][4341] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.251 [INFO][4341] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.257 [INFO][4341] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.259 [INFO][4341] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.263 [INFO][4341] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.263 [INFO][4341] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.265 [INFO][4341] ipam.go 1685: Creating new handle: k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.269 [INFO][4341] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.276 [INFO][4341] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.277 [INFO][4341] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" host="localhost" Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.277 [INFO][4341] ipam_plugin.go 373: Released host-wide IPAM lock. 
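The ipam.go lines above walk the full assignment path: confirm the host's affinity for block 192.168.88.128/26, write the block to claim an IP, and hand back 192.168.88.130/26. A quick stdlib check (illustrative only, not Calico's ipam.go) that the claimed address really falls inside that affine block:

```python
# Verify the IPAM claim from the log: 192.168.88.130 assigned out of the
# host-affine block 192.168.88.128/26.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.130")

assert claimed in block            # .130 lies inside .128/26
assert block.num_addresses == 64   # a /26 spans 64 addresses
```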
Aug 5 22:28:44.301760 containerd[1455]: 2024-08-05 22:28:44.277 [INFO][4341] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" HandleID="k8s-pod-network.e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.281 [INFO][4327] k8s.go 386: Populated endpoint ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tmsgz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7fe59236-5755-4505-80fa-7fe51da3d40d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-tmsgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie68bbc65821", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.281 [INFO][4327] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.281 [INFO][4327] dataplane_linux.go 68: Setting the host side veth name to calie68bbc65821 ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.285 [INFO][4327] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.286 [INFO][4327] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tmsgz-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7fe59236-5755-4505-80fa-7fe51da3d40d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d", Pod:"coredns-5dd5756b68-tmsgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie68bbc65821", MAC:"0e:51:e6:17:47:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:44.302362 containerd[1455]: 2024-08-05 22:28:44.293 [INFO][4327] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d" Namespace="kube-system" Pod="coredns-5dd5756b68-tmsgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:44.330467 containerd[1455]: 
time="2024-08-05T22:28:44.328658515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:44.330467 containerd[1455]: time="2024-08-05T22:28:44.329912153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:44.330467 containerd[1455]: time="2024-08-05T22:28:44.329955216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:44.330467 containerd[1455]: time="2024-08-05T22:28:44.329971478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:44.365079 systemd[1]: Started cri-containerd-e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d.scope - libcontainer container e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d. 
Aug 5 22:28:44.384790 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:28:44.420728 containerd[1455]: time="2024-08-05T22:28:44.419694311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tmsgz,Uid:7fe59236-5755-4505-80fa-7fe51da3d40d,Namespace:kube-system,Attempt:1,} returns sandbox id \"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d\"" Aug 5 22:28:44.420922 kubelet[2560]: E0805 22:28:44.420614 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:44.423554 containerd[1455]: time="2024-08-05T22:28:44.423490071Z" level=info msg="CreateContainer within sandbox \"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:28:44.432965 systemd-networkd[1399]: cali8f2a01c1cbe: Gained IPv6LL Aug 5 22:28:44.450316 containerd[1455]: time="2024-08-05T22:28:44.450243452Z" level=info msg="CreateContainer within sandbox \"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4d430a8ebeb2b268cf3515acb118bd34f40838ae4f99dab92e812ed9758fed9\"" Aug 5 22:28:44.451733 containerd[1455]: time="2024-08-05T22:28:44.451637449Z" level=info msg="StartContainer for \"c4d430a8ebeb2b268cf3515acb118bd34f40838ae4f99dab92e812ed9758fed9\"" Aug 5 22:28:44.486068 systemd[1]: Started cri-containerd-c4d430a8ebeb2b268cf3515acb118bd34f40838ae4f99dab92e812ed9758fed9.scope - libcontainer container c4d430a8ebeb2b268cf3515acb118bd34f40838ae4f99dab92e812ed9758fed9. 
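The workload-endpoint dumps earlier in this run print port numbers in Go hex notation (Port:0x35, Port:0x23c1). Decoding them recovers the familiar CoreDNS ports declared on the endpoint (a sketch, not Calico code):

```python
# Decode the hex WorkloadEndpointPort values from the endpoint dumps.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}

assert ports["dns"] == 53        # DNS over UDP
assert ports["dns-tcp"] == 53    # DNS over TCP
assert ports["metrics"] == 9153  # CoreDNS Prometheus metrics endpoint
```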
Aug 5 22:28:44.522628 containerd[1455]: time="2024-08-05T22:28:44.522568584Z" level=info msg="StartContainer for \"c4d430a8ebeb2b268cf3515acb118bd34f40838ae4f99dab92e812ed9758fed9\" returns successfully" Aug 5 22:28:44.700735 containerd[1455]: time="2024-08-05T22:28:44.700541696Z" level=info msg="StopPodSandbox for \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\"" Aug 5 22:28:44.700735 containerd[1455]: time="2024-08-05T22:28:44.700597262Z" level=info msg="StopPodSandbox for \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\"" Aug 5 22:28:44.905814 kubelet[2560]: E0805 22:28:44.905563 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:45.054190 systemd[1]: run-containerd-runc-k8s.io-e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d-runc.QhR3XR.mount: Deactivated successfully. Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.887 [INFO][4476] k8s.go 608: Cleaning up netns ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.887 [INFO][4476] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" iface="eth0" netns="/var/run/netns/cni-26cea631-fb7b-5e04-51b4-1fb40bbf39f2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.887 [INFO][4476] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" iface="eth0" netns="/var/run/netns/cni-26cea631-fb7b-5e04-51b4-1fb40bbf39f2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.888 [INFO][4476] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" iface="eth0" netns="/var/run/netns/cni-26cea631-fb7b-5e04-51b4-1fb40bbf39f2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.888 [INFO][4476] k8s.go 615: Releasing IP address(es) ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:44.888 [INFO][4476] utils.go 188: Calico CNI releasing IP address ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.332 [INFO][4491] ipam_plugin.go 411: Releasing address using handleID ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" HandleID="k8s-pod-network.cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.332 [INFO][4491] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.332 [INFO][4491] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.337 [WARNING][4491] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" HandleID="k8s-pod-network.cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.337 [INFO][4491] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" HandleID="k8s-pod-network.cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.338 [INFO][4491] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:45.342776 containerd[1455]: 2024-08-05 22:28:45.340 [INFO][4476] k8s.go 621: Teardown processing complete. ContainerID="cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2" Aug 5 22:28:45.343521 containerd[1455]: time="2024-08-05T22:28:45.342901863Z" level=info msg="TearDown network for sandbox \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\" successfully" Aug 5 22:28:45.343521 containerd[1455]: time="2024-08-05T22:28:45.342943664Z" level=info msg="StopPodSandbox for \"cc97e728f3119b403efab851c66364f9c11a56379522097c2fba84e4bd9939d2\" returns successfully" Aug 5 22:28:45.344722 containerd[1455]: time="2024-08-05T22:28:45.343658847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp77v,Uid:31e0c4e3-71d6-44b3-8e8d-50979a20c140,Namespace:calico-system,Attempt:1,}" Aug 5 22:28:45.346173 systemd[1]: run-netns-cni\x2d26cea631\x2dfb7b\x2d5e04\x2d51b4\x2d1fb40bbf39f2.mount: Deactivated successfully. 
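The mount-unit names in the systemd lines above (e.g. run-netns-cni\x2d26cea631\x2dfb7b\x2d5e04\x2d51b4\x2d1fb40bbf39f2.mount) come from systemd's unit-name escaping: '/' separators in a path become '-', so a literal '-' inside a component must be escaped as \x2d. A rough sketch of that one rule (simplified; the real systemd-escape also handles dots and other disallowed characters):

```python
# Simplified model of systemd unit-name escaping for a netns mount path:
# only the hyphen rule is shown, enough to reproduce the names in the log.
def systemd_escape_component(name: str) -> str:
    return name.replace("-", "\\x2d")

unit = "run-netns-" + systemd_escape_component(
    "cni-26cea631-fb7b-5e04-51b4-1fb40bbf39f2") + ".mount"
# unit reproduces the escaped mount-unit name seen in the journal
```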
Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.919 [INFO][4475] k8s.go 608: Cleaning up netns ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.920 [INFO][4475] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" iface="eth0" netns="/var/run/netns/cni-c24f53c2-b63c-4b27-3009-1d6c52e6b192" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.920 [INFO][4475] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" iface="eth0" netns="/var/run/netns/cni-c24f53c2-b63c-4b27-3009-1d6c52e6b192" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.920 [INFO][4475] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" iface="eth0" netns="/var/run/netns/cni-c24f53c2-b63c-4b27-3009-1d6c52e6b192" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.920 [INFO][4475] k8s.go 615: Releasing IP address(es) ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:44.920 [INFO][4475] utils.go 188: Calico CNI releasing IP address ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.379 [INFO][4497] ipam_plugin.go 411: Releasing address using handleID ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" HandleID="k8s-pod-network.dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.380 [INFO][4497] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.380 [INFO][4497] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.385 [WARNING][4497] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" HandleID="k8s-pod-network.dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.385 [INFO][4497] ipam_plugin.go 439: Releasing address using workloadID ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" HandleID="k8s-pod-network.dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.387 [INFO][4497] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:45.392851 containerd[1455]: 2024-08-05 22:28:45.390 [INFO][4475] k8s.go 621: Teardown processing complete. 
ContainerID="dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974" Aug 5 22:28:45.393707 containerd[1455]: time="2024-08-05T22:28:45.393638274Z" level=info msg="TearDown network for sandbox \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\" successfully" Aug 5 22:28:45.393778 containerd[1455]: time="2024-08-05T22:28:45.393759757Z" level=info msg="StopPodSandbox for \"dd916514fe37f6f482635dc8848bbb754a683e567ae10232834b37e81ad5e974\" returns successfully" Aug 5 22:28:45.394196 kubelet[2560]: E0805 22:28:45.394171 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:45.394957 containerd[1455]: time="2024-08-05T22:28:45.394905756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6cjgz,Uid:822485eb-5c50-4b95-b82a-b13ac5143fc7,Namespace:kube-system,Attempt:1,}" Aug 5 22:28:45.396640 systemd[1]: run-netns-cni\x2dc24f53c2\x2db63c\x2d4b27\x2d3009\x2d1d6c52e6b192.mount: Deactivated successfully. 
Aug 5 22:28:45.697157 containerd[1455]: time="2024-08-05T22:28:45.697100327Z" level=info msg="StopPodSandbox for \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\"" Aug 5 22:28:45.697355 containerd[1455]: time="2024-08-05T22:28:45.697231348Z" level=info msg="TearDown network for sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" successfully" Aug 5 22:28:45.697355 containerd[1455]: time="2024-08-05T22:28:45.697249203Z" level=info msg="StopPodSandbox for \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" returns successfully" Aug 5 22:28:45.772942 containerd[1455]: time="2024-08-05T22:28:45.772841659Z" level=info msg="RemovePodSandbox for \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\"" Aug 5 22:28:45.776655 containerd[1455]: time="2024-08-05T22:28:45.776524177Z" level=info msg="Forcibly stopping sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\"" Aug 5 22:28:45.777332 systemd-networkd[1399]: calie68bbc65821: Gained IPv6LL Aug 5 22:28:45.784849 containerd[1455]: time="2024-08-05T22:28:45.776664177Z" level=info msg="TearDown network for sandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" successfully" Aug 5 22:28:45.850450 containerd[1455]: time="2024-08-05T22:28:45.848888789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
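The warning above ("an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus.") is containerd's own wording for idempotent cleanup: during a forced RemovePodSandbox, a sandbox whose status can no longer be looked up is treated as already gone, and removal still returns successfully. A toy model of that discipline (names are illustrative, not containerd's API):

```python
# Toy model of idempotent sandbox removal: a missing sandbox triggers a
# warning with a nil status, but removal succeeds either way.
def remove_sandbox(store, sandbox_id):
    status = store.pop(sandbox_id, None)  # None plays the role of nil status
    if status is None:
        print(f"warning: sandbox {sandbox_id} not found; sending nil status")
    return "removed"  # RemovePodSandbox returns successfully in both cases

store = {"sb-1": {"state": "NOTREADY"}}
remove_sandbox(store, "sb-1")   # normal path
remove_sandbox(store, "sb-1")   # repeat: warns, still succeeds
```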
Aug 5 22:28:45.851252 containerd[1455]: time="2024-08-05T22:28:45.849074596Z" level=info msg="RemovePodSandbox \"4786a465d7e94d2e06467a84e81375245ed9f668e9022897849ec95a7e8f5823\" returns successfully" Aug 5 22:28:45.851856 containerd[1455]: time="2024-08-05T22:28:45.851824013Z" level=info msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\"" Aug 5 22:28:45.908431 kubelet[2560]: E0805 22:28:45.908120 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:45.922971 kubelet[2560]: I0805 22:28:45.922910 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tmsgz" podStartSLOduration=46.922861186 podCreationTimestamp="2024-08-05 22:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:28:45.072150974 +0000 UTC m=+59.488829228" watchObservedRunningTime="2024-08-05 22:28:45.922861186 +0000 UTC m=+60.339539440" Aug 5 22:28:45.982507 systemd-networkd[1399]: cali58d6d17ebc3: Link UP Aug 5 22:28:45.986005 systemd-networkd[1399]: cali58d6d17ebc3: Gained carrier Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.843 [INFO][4512] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tp77v-eth0 csi-node-driver- calico-system 31e0c4e3-71d6-44b3-8e8d-50979a20c140 884 0 2024-08-05 22:28:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-tp77v eth0 default [] [] [kns.calico-system 
ksa.calico-system.default] cali58d6d17ebc3 [] []}} ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.844 [INFO][4512] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.896 [INFO][4544] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" HandleID="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.915 [INFO][4544] ipam_plugin.go 264: Auto assigning IP ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" HandleID="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c0d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tp77v", "timestamp":"2024-08-05 22:28:45.896207239 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.916 [INFO][4544] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.916 [INFO][4544] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
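The paired "About to acquire / Acquired / Released host-wide IPAM lock" messages bracketing each assignment show one node-wide lock serializing all IPAM operations, so concurrent CNI invocations (here [4341] and [4544]) cannot hand out the same address. A toy model of that discipline, with threading.Lock standing in for Calico's cross-process lock (names illustrative):

```python
# Toy model of host-wide IPAM serialization: the lock brackets each
# assignment, so two back-to-back requests get distinct addresses.
import threading

ipam_lock = threading.Lock()
assigned = set()

def assign_address(pool):
    with ipam_lock:                 # "Acquired host-wide IPAM lock."
        for addr in pool:
            if addr not in assigned:
                assigned.add(addr)
                return addr         # lock released on scope exit
    return None                     # pool exhausted

pool = ["192.168.88.130", "192.168.88.131"]
a = assign_address(pool)
b = assign_address(pool)
# a and b are distinct, mirroring .130 then .131 in the journal
```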
Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.916 [INFO][4544] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.919 [INFO][4544] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.929 [INFO][4544] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.941 [INFO][4544] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.944 [INFO][4544] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.948 [INFO][4544] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.948 [INFO][4544] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.952 [INFO][4544] ipam.go 1685: Creating new handle: k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.957 [INFO][4544] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.963 [INFO][4544] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" host="localhost" Aug 5 
22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.963 [INFO][4544] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" host="localhost" Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.963 [INFO][4544] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.007341 containerd[1455]: 2024-08-05 22:28:45.964 [INFO][4544] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" HandleID="k8s-pod-network.a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Workload="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:45.976 [INFO][4512] k8s.go 386: Populated endpoint ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tp77v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31e0c4e3-71d6-44b3-8e8d-50979a20c140", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tp77v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali58d6d17ebc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:45.976 [INFO][4512] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:45.976 [INFO][4512] dataplane_linux.go 68: Setting the host side veth name to cali58d6d17ebc3 ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:45.988 [INFO][4512] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:45.989 [INFO][4512] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tp77v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31e0c4e3-71d6-44b3-8e8d-50979a20c140", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f", Pod:"csi-node-driver-tp77v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali58d6d17ebc3", MAC:"5a:6c:86:01:e9:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.008455 containerd[1455]: 2024-08-05 22:28:46.002 [INFO][4512] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f" Namespace="calico-system" Pod="csi-node-driver-tp77v" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp77v-eth0" Aug 5 22:28:46.041364 systemd-networkd[1399]: calib169e5c022b: Link UP Aug 5 22:28:46.041647 systemd-networkd[1399]: calib169e5c022b: Gained carrier Aug 5 22:28:46.060571 containerd[1455]: time="2024-08-05T22:28:46.059352761Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:46.060571 containerd[1455]: time="2024-08-05T22:28:46.059433165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:46.060571 containerd[1455]: time="2024-08-05T22:28:46.059454116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:46.060571 containerd[1455]: time="2024-08-05T22:28:46.059468563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.855 [INFO][4521] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--6cjgz-eth0 coredns-5dd5756b68- kube-system 822485eb-5c50-4b95-b82a-b13ac5143fc7 885 0 2024-08-05 22:27:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-6cjgz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib169e5c022b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.856 [INFO][4521] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.899 
[INFO][4558] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" HandleID="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.919 [INFO][4558] ipam_plugin.go 264: Auto assigning IP ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" HandleID="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4960), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-6cjgz", "timestamp":"2024-08-05 22:28:45.899197739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.919 [INFO][4558] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.963 [INFO][4558] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.964 [INFO][4558] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.968 [INFO][4558] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.975 [INFO][4558] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.989 [INFO][4558] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:45.998 [INFO][4558] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.004 [INFO][4558] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.004 [INFO][4558] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.011 [INFO][4558] ipam.go 1685: Creating new handle: k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988 Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.020 [INFO][4558] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.026 [INFO][4558] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" host="localhost" Aug 5 
22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.026 [INFO][4558] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" host="localhost" Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.026 [INFO][4558] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.066068 containerd[1455]: 2024-08-05 22:28:46.026 [INFO][4558] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" HandleID="k8s-pod-network.fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Workload="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.034 [INFO][4521] k8s.go 386: Populated endpoint ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--6cjgz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"822485eb-5c50-4b95-b82a-b13ac5143fc7", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-6cjgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib169e5c022b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.034 [INFO][4521] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.034 [INFO][4521] dataplane_linux.go 68: Setting the host side veth name to calib169e5c022b ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.041 [INFO][4521] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.042 [INFO][4521] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--6cjgz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"822485eb-5c50-4b95-b82a-b13ac5143fc7", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988", Pod:"coredns-5dd5756b68-6cjgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib169e5c022b", MAC:"da:9d:b4:10:a6:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.066987 containerd[1455]: 2024-08-05 22:28:46.055 [INFO][4521] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988" Namespace="kube-system" Pod="coredns-5dd5756b68-6cjgz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--6cjgz-eth0" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.931 [WARNING][4568] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tmsgz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7fe59236-5755-4505-80fa-7fe51da3d40d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d", Pod:"coredns-5dd5756b68-tmsgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie68bbc65821", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.931 [INFO][4568] k8s.go 608: Cleaning up netns ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.931 [INFO][4568] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" iface="eth0" netns="" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.931 [INFO][4568] k8s.go 615: Releasing IP address(es) ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.931 [INFO][4568] utils.go 188: Calico CNI releasing IP address ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.962 [INFO][4582] ipam_plugin.go 411: Releasing address using handleID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:45.962 [INFO][4582] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:46.026 [INFO][4582] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:46.032 [WARNING][4582] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:46.032 [INFO][4582] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:46.036 [INFO][4582] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.071892 containerd[1455]: 2024-08-05 22:28:46.058 [INFO][4568] k8s.go 621: Teardown processing complete. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.072476 containerd[1455]: time="2024-08-05T22:28:46.071936689Z" level=info msg="TearDown network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" successfully" Aug 5 22:28:46.072476 containerd[1455]: time="2024-08-05T22:28:46.071967668Z" level=info msg="StopPodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" returns successfully" Aug 5 22:28:46.072576 containerd[1455]: time="2024-08-05T22:28:46.072538023Z" level=info msg="RemovePodSandbox for \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\"" Aug 5 22:28:46.072618 containerd[1455]: time="2024-08-05T22:28:46.072575534Z" level=info msg="Forcibly stopping sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\"" Aug 5 22:28:46.107972 systemd[1]: Started cri-containerd-a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f.scope - libcontainer 
container a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f. Aug 5 22:28:46.126751 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:28:46.149366 containerd[1455]: time="2024-08-05T22:28:46.149240150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp77v,Uid:31e0c4e3-71d6-44b3-8e8d-50979a20c140,Namespace:calico-system,Attempt:1,} returns sandbox id \"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f\"" Aug 5 22:28:46.153333 containerd[1455]: time="2024-08-05T22:28:46.152988701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:28:46.153333 containerd[1455]: time="2024-08-05T22:28:46.153095776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:46.153333 containerd[1455]: time="2024-08-05T22:28:46.153140071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:28:46.153333 containerd[1455]: time="2024-08-05T22:28:46.153160881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:28:46.182131 systemd[1]: Started cri-containerd-fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988.scope - libcontainer container fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988. Aug 5 22:28:46.201941 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.177 [WARNING][4664] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tmsgz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7fe59236-5755-4505-80fa-7fe51da3d40d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 27, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e81e28a58092c1031a36f648dfa5644e01d1cf36105e5ab7eff141ded62c043d", Pod:"coredns-5dd5756b68-tmsgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie68bbc65821", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.178 [INFO][4664] k8s.go 608: Cleaning up netns 
ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.179 [INFO][4664] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" iface="eth0" netns="" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.179 [INFO][4664] k8s.go 615: Releasing IP address(es) ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.179 [INFO][4664] utils.go 188: Calico CNI releasing IP address ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.210 [INFO][4712] ipam_plugin.go 411: Releasing address using handleID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.210 [INFO][4712] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.210 [INFO][4712] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.218 [WARNING][4712] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.219 [INFO][4712] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" HandleID="k8s-pod-network.2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Workload="localhost-k8s-coredns--5dd5756b68--tmsgz-eth0" Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.221 [INFO][4712] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.237710 containerd[1455]: 2024-08-05 22:28:46.226 [INFO][4664] k8s.go 621: Teardown processing complete. ContainerID="2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6" Aug 5 22:28:46.237710 containerd[1455]: time="2024-08-05T22:28:46.237608216Z" level=info msg="TearDown network for sandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" successfully" Aug 5 22:28:46.247589 containerd[1455]: time="2024-08-05T22:28:46.247406390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6cjgz,Uid:822485eb-5c50-4b95-b82a-b13ac5143fc7,Namespace:kube-system,Attempt:1,} returns sandbox id \"fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988\"" Aug 5 22:28:46.249430 kubelet[2560]: E0805 22:28:46.249388 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:46.251864 containerd[1455]: time="2024-08-05T22:28:46.251795499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Aug 5 22:28:46.251965 containerd[1455]: time="2024-08-05T22:28:46.251879190Z" level=info msg="RemovePodSandbox \"2d6f1a0ffb2155910b344c205c0ab366f210328375079ade382cae0b04eb1cb6\" returns successfully" Aug 5 22:28:46.252960 containerd[1455]: time="2024-08-05T22:28:46.252812610Z" level=info msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\"" Aug 5 22:28:46.253632 containerd[1455]: time="2024-08-05T22:28:46.253604209Z" level=info msg="CreateContainer within sandbox \"fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:28:46.270055 containerd[1455]: time="2024-08-05T22:28:46.269447198Z" level=info msg="CreateContainer within sandbox \"fbd9678055c366207b58f314837469de5de6d11bef882c3fe2a2beb5de6d1988\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e42a8dd8f8ffb7fcdbadee2cd6a3f70866cd6c568de1413351bd7f84d894b6f\"" Aug 5 22:28:46.271182 containerd[1455]: time="2024-08-05T22:28:46.271154153Z" level=info msg="StartContainer for \"8e42a8dd8f8ffb7fcdbadee2cd6a3f70866cd6c568de1413351bd7f84d894b6f\"" Aug 5 22:28:46.315143 systemd[1]: Started cri-containerd-8e42a8dd8f8ffb7fcdbadee2cd6a3f70866cd6c568de1413351bd7f84d894b6f.scope - libcontainer container 8e42a8dd8f8ffb7fcdbadee2cd6a3f70866cd6c568de1413351bd7f84d894b6f. Aug 5 22:28:46.360537 containerd[1455]: time="2024-08-05T22:28:46.360416253Z" level=info msg="StartContainer for \"8e42a8dd8f8ffb7fcdbadee2cd6a3f70866cd6c568de1413351bd7f84d894b6f\" returns successfully" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.328 [WARNING][4751] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0", GenerateName:"calico-kube-controllers-77d4c44755-", Namespace:"calico-system", SelfLink:"", UID:"741e867c-2688-4bc5-8045-f058e5990eb4", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d4c44755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c", Pod:"calico-kube-controllers-77d4c44755-wrv2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f2a01c1cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.328 [INFO][4751] k8s.go 608: Cleaning up netns ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.328 [INFO][4751] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" iface="eth0" netns="" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.328 [INFO][4751] k8s.go 615: Releasing IP address(es) ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.328 [INFO][4751] utils.go 188: Calico CNI releasing IP address ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.372 [INFO][4783] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.373 [INFO][4783] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.373 [INFO][4783] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.380 [WARNING][4783] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.380 [INFO][4783] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.383 [INFO][4783] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.392895 containerd[1455]: 2024-08-05 22:28:46.385 [INFO][4751] k8s.go 621: Teardown processing complete. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.393354 containerd[1455]: time="2024-08-05T22:28:46.392964506Z" level=info msg="TearDown network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" successfully" Aug 5 22:28:46.393354 containerd[1455]: time="2024-08-05T22:28:46.393000544Z" level=info msg="StopPodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" returns successfully" Aug 5 22:28:46.393535 containerd[1455]: time="2024-08-05T22:28:46.393489733Z" level=info msg="RemovePodSandbox for \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\"" Aug 5 22:28:46.393568 containerd[1455]: time="2024-08-05T22:28:46.393531954Z" level=info msg="Forcibly stopping sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\"" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.442 [WARNING][4818] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0", GenerateName:"calico-kube-controllers-77d4c44755-", Namespace:"calico-system", SelfLink:"", UID:"741e867c-2688-4bc5-8045-f058e5990eb4", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 28, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d4c44755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c", Pod:"calico-kube-controllers-77d4c44755-wrv2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f2a01c1cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.442 [INFO][4818] k8s.go 608: Cleaning up netns ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.442 [INFO][4818] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" iface="eth0" netns="" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.442 [INFO][4818] k8s.go 615: Releasing IP address(es) ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.442 [INFO][4818] utils.go 188: Calico CNI releasing IP address ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.469 [INFO][4825] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.469 [INFO][4825] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.469 [INFO][4825] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.476 [WARNING][4825] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.476 [INFO][4825] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" HandleID="k8s-pod-network.ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Workload="localhost-k8s-calico--kube--controllers--77d4c44755--wrv2h-eth0" Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.477 [INFO][4825] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:28:46.484151 containerd[1455]: 2024-08-05 22:28:46.480 [INFO][4818] k8s.go 621: Teardown processing complete. ContainerID="ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021" Aug 5 22:28:46.484782 containerd[1455]: time="2024-08-05T22:28:46.484186936Z" level=info msg="TearDown network for sandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" successfully" Aug 5 22:28:46.517038 containerd[1455]: time="2024-08-05T22:28:46.516862203Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:28:46.517038 containerd[1455]: time="2024-08-05T22:28:46.516948528Z" level=info msg="RemovePodSandbox \"ee1aaf5b1a47dd640150f9b722d225ae5db7259468dd9d42df86ea388175b021\" returns successfully" Aug 5 22:28:46.650719 containerd[1455]: time="2024-08-05T22:28:46.650605145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:46.651442 containerd[1455]: time="2024-08-05T22:28:46.651383057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:28:46.653306 containerd[1455]: time="2024-08-05T22:28:46.653236262Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:46.655424 containerd[1455]: time="2024-08-05T22:28:46.655362220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:46.656038 containerd[1455]: time="2024-08-05T22:28:46.655981748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.130599219s" Aug 5 22:28:46.656038 containerd[1455]: time="2024-08-05T22:28:46.656018218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:28:46.656786 containerd[1455]: time="2024-08-05T22:28:46.656574545Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:28:46.664660 containerd[1455]: time="2024-08-05T22:28:46.664617473Z" level=info msg="CreateContainer within sandbox \"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:28:46.680552 containerd[1455]: time="2024-08-05T22:28:46.680471434Z" level=info msg="CreateContainer within sandbox \"3429a7aa2af51bb81295c21ef08b3a2ba5e644c3ca38ed539409d754ee22377c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a6d9a686289e2a6c2328f5069be4c2ad327380bd80e0fa00777a93936ab765b2\"" Aug 5 22:28:46.681128 containerd[1455]: time="2024-08-05T22:28:46.681084479Z" level=info msg="StartContainer for \"a6d9a686289e2a6c2328f5069be4c2ad327380bd80e0fa00777a93936ab765b2\"" Aug 5 22:28:46.719989 systemd[1]: Started cri-containerd-a6d9a686289e2a6c2328f5069be4c2ad327380bd80e0fa00777a93936ab765b2.scope - libcontainer container a6d9a686289e2a6c2328f5069be4c2ad327380bd80e0fa00777a93936ab765b2. 
Aug 5 22:28:46.772399 containerd[1455]: time="2024-08-05T22:28:46.772232407Z" level=info msg="StartContainer for \"a6d9a686289e2a6c2328f5069be4c2ad327380bd80e0fa00777a93936ab765b2\" returns successfully" Aug 5 22:28:46.915468 kubelet[2560]: E0805 22:28:46.915421 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:46.924329 kubelet[2560]: E0805 22:28:46.924166 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:46.928336 kubelet[2560]: I0805 22:28:46.928271 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77d4c44755-wrv2h" podStartSLOduration=37.796825143 podCreationTimestamp="2024-08-05 22:28:06 +0000 UTC" firstStartedPulling="2024-08-05 22:28:43.525051454 +0000 UTC m=+57.941729708" lastFinishedPulling="2024-08-05 22:28:46.656344674 +0000 UTC m=+61.073022938" observedRunningTime="2024-08-05 22:28:46.926421899 +0000 UTC m=+61.343100153" watchObservedRunningTime="2024-08-05 22:28:46.928118373 +0000 UTC m=+61.344796627" Aug 5 22:28:47.127233 kubelet[2560]: I0805 22:28:47.126745 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6cjgz" podStartSLOduration=48.126700437 podCreationTimestamp="2024-08-05 22:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:28:46.938405025 +0000 UTC m=+61.355083279" watchObservedRunningTime="2024-08-05 22:28:47.126700437 +0000 UTC m=+61.543378691" Aug 5 22:28:47.224163 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). 
Aug 5 22:28:47.293832 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:47.295815 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:47.300975 systemd-logind[1445]: New session 15 of user core. Aug 5 22:28:47.311927 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:28:47.455566 sshd[4894]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:47.461345 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:60732.service: Deactivated successfully. Aug 5 22:28:47.464991 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:28:47.466115 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:28:47.467244 systemd-logind[1445]: Removed session 15. Aug 5 22:28:47.505244 systemd-networkd[1399]: cali58d6d17ebc3: Gained IPv6LL Aug 5 22:28:47.697097 systemd-networkd[1399]: calib169e5c022b: Gained IPv6LL Aug 5 22:28:47.927875 kubelet[2560]: E0805 22:28:47.927835 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:48.454866 containerd[1455]: time="2024-08-05T22:28:48.454804511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:48.475296 containerd[1455]: time="2024-08-05T22:28:48.475196842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:28:48.550213 containerd[1455]: time="2024-08-05T22:28:48.550097569Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:48.587169 containerd[1455]: time="2024-08-05T22:28:48.587083022Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:48.588163 containerd[1455]: time="2024-08-05T22:28:48.588101264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.931493785s" Aug 5 22:28:48.588163 containerd[1455]: time="2024-08-05T22:28:48.588142753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:28:48.590130 containerd[1455]: time="2024-08-05T22:28:48.590053163Z" level=info msg="CreateContainer within sandbox \"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:28:48.930036 kubelet[2560]: E0805 22:28:48.930008 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:28:49.043822 containerd[1455]: time="2024-08-05T22:28:49.043750618Z" level=info msg="CreateContainer within sandbox \"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5\"" Aug 5 22:28:49.044761 containerd[1455]: time="2024-08-05T22:28:49.044697561Z" level=info msg="StartContainer for \"5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5\"" Aug 5 22:28:49.079433 systemd[1]: 
run-containerd-runc-k8s.io-5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5-runc.lVy4rE.mount: Deactivated successfully. Aug 5 22:28:49.087310 systemd[1]: Started cri-containerd-5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5.scope - libcontainer container 5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5. Aug 5 22:28:49.133286 containerd[1455]: time="2024-08-05T22:28:49.133230705Z" level=info msg="StartContainer for \"5eb325b009e79ea44bd007855a26c9008b315ffb14dbbd1ea3cad9a1921d65f5\" returns successfully" Aug 5 22:28:49.135264 containerd[1455]: time="2024-08-05T22:28:49.134774992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:28:51.930309 containerd[1455]: time="2024-08-05T22:28:51.930224025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:51.934145 containerd[1455]: time="2024-08-05T22:28:51.934097816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:28:51.936130 containerd[1455]: time="2024-08-05T22:28:51.936097030Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:51.939434 containerd[1455]: time="2024-08-05T22:28:51.939384568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:28:51.940316 containerd[1455]: time="2024-08-05T22:28:51.940281363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.80546382s" Aug 5 22:28:51.940505 containerd[1455]: time="2024-08-05T22:28:51.940318485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:28:51.942479 containerd[1455]: time="2024-08-05T22:28:51.941776373Z" level=info msg="CreateContainer within sandbox \"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:28:51.971360 containerd[1455]: time="2024-08-05T22:28:51.971273118Z" level=info msg="CreateContainer within sandbox \"a4d761186d7314d4fa95e8a14e58e55cbf36500ce69b1893046605c96a6b7f8f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e4770f0fde051658cd7cc337d9d49cd703151f1947b0f1c7506d6fc2d6ba9e41\"" Aug 5 22:28:51.971928 containerd[1455]: time="2024-08-05T22:28:51.971866363Z" level=info msg="StartContainer for \"e4770f0fde051658cd7cc337d9d49cd703151f1947b0f1c7506d6fc2d6ba9e41\"" Aug 5 22:28:52.010063 systemd[1]: Started cri-containerd-e4770f0fde051658cd7cc337d9d49cd703151f1947b0f1c7506d6fc2d6ba9e41.scope - libcontainer container e4770f0fde051658cd7cc337d9d49cd703151f1947b0f1c7506d6fc2d6ba9e41. Aug 5 22:28:52.046519 containerd[1455]: time="2024-08-05T22:28:52.046468196Z" level=info msg="StartContainer for \"e4770f0fde051658cd7cc337d9d49cd703151f1947b0f1c7506d6fc2d6ba9e41\" returns successfully" Aug 5 22:28:52.469637 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806). 
Aug 5 22:28:52.517676 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:52.519694 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:52.525781 systemd-logind[1445]: New session 16 of user core. Aug 5 22:28:52.532852 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:28:52.679615 sshd[5005]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:52.685737 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:59806.service: Deactivated successfully. Aug 5 22:28:52.688233 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:28:52.689150 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:28:52.690410 systemd-logind[1445]: Removed session 16. Aug 5 22:28:52.825862 kubelet[2560]: I0805 22:28:52.825718 2560 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:28:52.825862 kubelet[2560]: I0805 22:28:52.825759 2560 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:28:52.957216 kubelet[2560]: I0805 22:28:52.957147 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tp77v" podStartSLOduration=41.167085783 podCreationTimestamp="2024-08-05 22:28:06 +0000 UTC" firstStartedPulling="2024-08-05 22:28:46.1504811 +0000 UTC m=+60.567159354" lastFinishedPulling="2024-08-05 22:28:51.9404781 +0000 UTC m=+66.357156364" observedRunningTime="2024-08-05 22:28:52.95678844 +0000 UTC m=+67.373466694" watchObservedRunningTime="2024-08-05 22:28:52.957082793 +0000 UTC m=+67.373761077" Aug 5 22:28:57.692375 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:59814.service - OpenSSH per-connection server daemon 
(10.0.0.1:59814). Aug 5 22:28:57.730372 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 59814 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:28:57.732068 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:28:57.736279 systemd-logind[1445]: New session 17 of user core. Aug 5 22:28:57.743831 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:28:57.861903 sshd[5023]: pam_unix(sshd:session): session closed for user core Aug 5 22:28:57.866283 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:59814.service: Deactivated successfully. Aug 5 22:28:57.868595 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:28:57.869377 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:28:57.870319 systemd-logind[1445]: Removed session 17. Aug 5 22:28:59.701557 kubelet[2560]: E0805 22:28:59.701511 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:29:02.878723 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:33638.service - OpenSSH per-connection server daemon (10.0.0.1:33638). Aug 5 22:29:03.051238 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 33638 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:29:03.053948 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:29:03.061011 systemd-logind[1445]: New session 18 of user core. Aug 5 22:29:03.078183 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:29:03.206170 sshd[5071]: pam_unix(sshd:session): session closed for user core Aug 5 22:29:03.213035 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:33638.service: Deactivated successfully. Aug 5 22:29:03.216331 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:29:03.217358 systemd-logind[1445]: Session 18 logged out. 
Waiting for processes to exit. Aug 5 22:29:03.218766 systemd-logind[1445]: Removed session 18. Aug 5 22:29:06.705013 kubelet[2560]: E0805 22:29:06.703262 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:29:08.242922 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:33650.service - OpenSSH per-connection server daemon (10.0.0.1:33650). Aug 5 22:29:08.423736 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 33650 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:29:08.422832 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:29:08.454357 systemd-logind[1445]: New session 19 of user core. Aug 5 22:29:08.461193 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:29:08.768112 sshd[5088]: pam_unix(sshd:session): session closed for user core Aug 5 22:29:08.794856 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:33650.service: Deactivated successfully. Aug 5 22:29:08.799662 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:29:08.808887 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:29:08.822356 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:33654.service - OpenSSH per-connection server daemon (10.0.0.1:33654). Aug 5 22:29:08.824544 systemd-logind[1445]: Removed session 19. Aug 5 22:29:08.901533 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 33654 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:29:08.912567 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:29:08.928173 systemd-logind[1445]: New session 20 of user core. Aug 5 22:29:08.940338 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 5 22:29:09.701976 kubelet[2560]: E0805 22:29:09.701412 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:29:09.819876 sshd[5103]: pam_unix(sshd:session): session closed for user core Aug 5 22:29:09.831114 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:33654.service: Deactivated successfully. Aug 5 22:29:09.834360 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:29:09.835580 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:29:09.846856 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:33662.service - OpenSSH per-connection server daemon (10.0.0.1:33662). Aug 5 22:29:09.849392 systemd-logind[1445]: Removed session 20. Aug 5 22:29:09.921210 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 33662 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI Aug 5 22:29:09.923540 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:29:09.931619 systemd-logind[1445]: New session 21 of user core. Aug 5 22:29:09.940076 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:29:10.066271 kubelet[2560]: I0805 22:29:10.065632 2560 topology_manager.go:215] "Topology Admit Handler" podUID="3af2fd20-f8bf-49f6-b71a-373be72a95aa" podNamespace="calico-apiserver" podName="calico-apiserver-67878f444d-zrngf" Aug 5 22:29:10.081534 systemd[1]: Created slice kubepods-besteffort-pod3af2fd20_f8bf_49f6_b71a_373be72a95aa.slice - libcontainer container kubepods-besteffort-pod3af2fd20_f8bf_49f6_b71a_373be72a95aa.slice. 
Aug 5 22:29:10.175719 kubelet[2560]: I0805 22:29:10.174938 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3af2fd20-f8bf-49f6-b71a-373be72a95aa-calico-apiserver-certs\") pod \"calico-apiserver-67878f444d-zrngf\" (UID: \"3af2fd20-f8bf-49f6-b71a-373be72a95aa\") " pod="calico-apiserver/calico-apiserver-67878f444d-zrngf" Aug 5 22:29:10.175719 kubelet[2560]: I0805 22:29:10.174991 2560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m429\" (UniqueName: \"kubernetes.io/projected/3af2fd20-f8bf-49f6-b71a-373be72a95aa-kube-api-access-8m429\") pod \"calico-apiserver-67878f444d-zrngf\" (UID: \"3af2fd20-f8bf-49f6-b71a-373be72a95aa\") " pod="calico-apiserver/calico-apiserver-67878f444d-zrngf" Aug 5 22:29:10.386935 containerd[1455]: time="2024-08-05T22:29:10.386779095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67878f444d-zrngf,Uid:3af2fd20-f8bf-49f6-b71a-373be72a95aa,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:29:10.555834 systemd-networkd[1399]: calibacd8b482b6: Link UP Aug 5 22:29:10.561035 systemd-networkd[1399]: calibacd8b482b6: Gained carrier Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.450 [INFO][5132] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0 calico-apiserver-67878f444d- calico-apiserver 3af2fd20-f8bf-49f6-b71a-373be72a95aa 1082 0 2024-08-05 22:29:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67878f444d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67878f444d-zrngf eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calibacd8b482b6 [] []}} ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-" Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.450 [INFO][5132] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0" Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.488 [INFO][5146] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" HandleID="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Workload="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0" Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.497 [INFO][5146] ipam_plugin.go 264: Auto assigning IP ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" HandleID="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Workload="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67878f444d-zrngf", "timestamp":"2024-08-05 22:29:10.488880031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.498 [INFO][5146] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.498 [INFO][5146] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.498 [INFO][5146] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.501 [INFO][5146] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.507 [INFO][5146] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.514 [INFO][5146] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.518 [INFO][5146] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.525 [INFO][5146] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.525 [INFO][5146] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.529 [INFO][5146] ipam.go 1685: Creating new handle: k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.537 [INFO][5146] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.544 [INFO][5146] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.544 [INFO][5146] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" host="localhost"
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.544 [INFO][5146] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:29:10.576546 containerd[1455]: 2024-08-05 22:29:10.544 [INFO][5146] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" HandleID="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Workload="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0"
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.548 [INFO][5132] k8s.go 386: Populated endpoint ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0", GenerateName:"calico-apiserver-67878f444d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3af2fd20-f8bf-49f6-b71a-373be72a95aa", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 29, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67878f444d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67878f444d-zrngf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibacd8b482b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.548 [INFO][5132] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0"
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.548 [INFO][5132] dataplane_linux.go 68: Setting the host side veth name to calibacd8b482b6 ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0"
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.557 [INFO][5132] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0"
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.559 [INFO][5132] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0", GenerateName:"calico-apiserver-67878f444d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3af2fd20-f8bf-49f6-b71a-373be72a95aa", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 29, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67878f444d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8", Pod:"calico-apiserver-67878f444d-zrngf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibacd8b482b6", MAC:"ca:25:e7:84:e7:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:29:10.581329 containerd[1455]: 2024-08-05 22:29:10.567 [INFO][5132] k8s.go 500: Wrote updated endpoint to datastore ContainerID="276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8" Namespace="calico-apiserver" Pod="calico-apiserver-67878f444d-zrngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--67878f444d--zrngf-eth0"
Aug 5 22:29:10.617502 containerd[1455]: time="2024-08-05T22:29:10.616382918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:29:10.617502 containerd[1455]: time="2024-08-05T22:29:10.616460025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:29:10.617502 containerd[1455]: time="2024-08-05T22:29:10.616479872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:29:10.617502 containerd[1455]: time="2024-08-05T22:29:10.616494350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:29:10.661888 systemd[1]: Started cri-containerd-276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8.scope - libcontainer container 276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8.
Aug 5 22:29:10.680407 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:29:10.709124 containerd[1455]: time="2024-08-05T22:29:10.709083022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67878f444d-zrngf,Uid:3af2fd20-f8bf-49f6-b71a-373be72a95aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8\""
Aug 5 22:29:10.710735 containerd[1455]: time="2024-08-05T22:29:10.710635671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:29:11.010982 sshd[5117]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:11.024865 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:33662.service: Deactivated successfully.
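The IPAM trace above follows a fixed sequence: acquire the host-wide lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free address for the handle, write the block back, and release the lock. A rough illustrative model of that block-affine allocation (not Calico's actual implementation; the `IPAMBlock` class is hypothetical, while the CIDR, host, and handle values are taken from the log):

```python
import ipaddress

class IPAMBlock:
    """Hypothetical model of a host-affine IPAM allocation block."""

    def __init__(self, cidr, affinity_host):
        self.cidr = ipaddress.ip_network(cidr)
        self.affinity_host = affinity_host
        self.allocated = {}  # IPv4Address -> handle that claimed it

    def auto_assign(self, num, handle, host):
        # Affinity must be confirmed before assignment (cf. "Affinity is
        # confirmed and block has been loaded" in the trace).
        if host != self.affinity_host:
            raise ValueError(f"block {self.cidr} is not affine to host {host!r}")
        assigned = []
        for ip in self.cidr.hosts():  # skips network and broadcast addresses
            if len(assigned) == num:
                break
            if ip not in self.allocated:
                self.allocated[ip] = handle
                assigned.append(ip)
        return assigned

block = IPAMBlock("192.168.88.128/26", affinity_host="localhost")
# Suppose .129-.132 were claimed by earlier pods, so the next free address
# is .133, matching "Successfully claimed IPs: [192.168.88.133/26]" above.
for taken in ("192.168.88.129", "192.168.88.130",
              "192.168.88.131", "192.168.88.132"):
    block.allocated[ipaddress.ip_address(taken)] = "earlier-pod"

ips = block.auto_assign(
    1,
    handle="k8s-pod-network.276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8",
    host="localhost",
)
print(ips)  # [IPv4Address('192.168.88.133')]
```

The host-wide lock seen in the trace serializes this whole sequence so two concurrent CNI invocations cannot claim the same address; this sketch omits it.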
Aug 5 22:29:11.027429 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:29:11.029378 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:29:11.042078 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904).
Aug 5 22:29:11.046278 systemd-logind[1445]: Removed session 21.
Aug 5 22:29:11.083067 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:11.085259 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:11.093851 systemd-logind[1445]: New session 22 of user core.
Aug 5 22:29:11.108026 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:29:11.484486 sshd[5220]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:11.505092 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:33916.service - OpenSSH per-connection server daemon (10.0.0.1:33916).
Aug 5 22:29:11.505751 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:33904.service: Deactivated successfully.
Aug 5 22:29:11.510496 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:29:11.514965 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:29:11.516507 systemd-logind[1445]: Removed session 22.
Aug 5 22:29:11.554533 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 33916 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:11.556735 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:11.562823 systemd-logind[1445]: New session 23 of user core.
Aug 5 22:29:11.572886 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:29:11.702802 kubelet[2560]: E0805 22:29:11.701046 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:29:11.800945 sshd[5233]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:11.806961 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:33916.service: Deactivated successfully.
Aug 5 22:29:11.809997 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:29:11.812064 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:29:11.813155 systemd-logind[1445]: Removed session 23.
Aug 5 22:29:11.894957 systemd-networkd[1399]: calibacd8b482b6: Gained IPv6LL
Aug 5 22:29:14.741383 containerd[1455]: time="2024-08-05T22:29:14.741255208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:29:14.743834 containerd[1455]: time="2024-08-05T22:29:14.743754238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:29:14.748859 containerd[1455]: time="2024-08-05T22:29:14.747307500Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:29:14.768100 containerd[1455]: time="2024-08-05T22:29:14.763118314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:29:14.768100 containerd[1455]: time="2024-08-05T22:29:14.764313441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.053628036s"
Aug 5 22:29:14.768100 containerd[1455]: time="2024-08-05T22:29:14.764349679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:29:14.772111 containerd[1455]: time="2024-08-05T22:29:14.771722356Z" level=info msg="CreateContainer within sandbox \"276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:29:14.827589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463195149.mount: Deactivated successfully.
Aug 5 22:29:14.847589 containerd[1455]: time="2024-08-05T22:29:14.847454687Z" level=info msg="CreateContainer within sandbox \"276d24440fdfc6de1da4da43ebf10d69a49f471836b3d4583ed979cb10681ac8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91231ecc00b1190feeee5b16132608081a45aaa5163a1fb92f1dbd3361010b57\""
Aug 5 22:29:14.852499 containerd[1455]: time="2024-08-05T22:29:14.850373633Z" level=info msg="StartContainer for \"91231ecc00b1190feeee5b16132608081a45aaa5163a1fb92f1dbd3361010b57\""
Aug 5 22:29:14.924729 systemd[1]: Started cri-containerd-91231ecc00b1190feeee5b16132608081a45aaa5163a1fb92f1dbd3361010b57.scope - libcontainer container 91231ecc00b1190feeee5b16132608081a45aaa5163a1fb92f1dbd3361010b57.
Aug 5 22:29:15.051957 containerd[1455]: time="2024-08-05T22:29:15.051786013Z" level=info msg="StartContainer for \"91231ecc00b1190feeee5b16132608081a45aaa5163a1fb92f1dbd3361010b57\" returns successfully"
Aug 5 22:29:16.109864 kubelet[2560]: I0805 22:29:16.109802 2560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67878f444d-zrngf" podStartSLOduration=2.055083787 podCreationTimestamp="2024-08-05 22:29:10 +0000 UTC" firstStartedPulling="2024-08-05 22:29:10.710406406 +0000 UTC m=+85.127084660" lastFinishedPulling="2024-08-05 22:29:14.765076799 +0000 UTC m=+89.181755053" observedRunningTime="2024-08-05 22:29:16.109088448 +0000 UTC m=+90.525766702" watchObservedRunningTime="2024-08-05 22:29:16.10975418 +0000 UTC m=+90.526432444"
Aug 5 22:29:16.820134 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926).
Aug 5 22:29:16.865770 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:16.868291 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:16.873427 systemd-logind[1445]: New session 24 of user core.
Aug 5 22:29:16.878988 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:29:16.995947 sshd[5333]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:17.000525 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:33926.service: Deactivated successfully.
Aug 5 22:29:17.002641 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:29:17.003380 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:29:17.004433 systemd-logind[1445]: Removed session 24.
Aug 5 22:29:22.011007 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:47146.service - OpenSSH per-connection server daemon (10.0.0.1:47146).
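The kubelet's pod_startup_latency_tracker line above reports podStartSLOduration=2.055083787, which is the watch-observed running time minus the pod creation timestamp, minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic using the timestamps from the log; fractions are truncated to microseconds because Python's `datetime` does not carry nanoseconds, so the result matches the logged value only up to that truncation:

```python
from datetime import datetime, timezone

def parse_kubelet_ts(ts: str) -> datetime:
    """Parse a '2024-08-05 22:29:16.10975418 +0000 UTC' style stamp.

    strptime's %f accepts at most 6 fractional digits, so the
    nanosecond tail is truncated to microseconds.
    """
    date, clock = ts.split()[:2]  # drop the "+0000 UTC" suffix (always UTC here)
    if "." in clock:
        hms, frac = clock.split(".")
        clock = f"{hms}.{frac[:6]}"
    else:
        clock += ".000000"
    return datetime.strptime(f"{date} {clock}",
                             "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created       = parse_kubelet_ts("2024-08-05 22:29:10 +0000 UTC")
first_pull    = parse_kubelet_ts("2024-08-05 22:29:10.710406406 +0000 UTC")
last_pull     = parse_kubelet_ts("2024-08-05 22:29:14.765076799 +0000 UTC")
watch_running = parse_kubelet_ts("2024-08-05 22:29:16.10975418 +0000 UTC")

pull_window = (last_pull - first_pull).total_seconds()  # ~4.054670 s
slo = ((watch_running - created) - (last_pull - first_pull)).total_seconds()
print(round(slo, 6))  # 2.055084 -- the logged 2.055083787, up to truncation
```

Note the pull window the kubelet subtracts (~4.0547 s) is slightly larger than the 4.053628036s containerd itself reported for the pull, since the two are measured at different points.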
Aug 5 22:29:22.051911 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 47146 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:22.053778 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:22.059213 systemd-logind[1445]: New session 25 of user core.
Aug 5 22:29:22.073874 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:29:22.196371 sshd[5378]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:22.202018 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:47146.service: Deactivated successfully.
Aug 5 22:29:22.204608 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:29:22.205844 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:29:22.207998 systemd-logind[1445]: Removed session 25.
Aug 5 22:29:27.215424 systemd[1]: Started sshd@26-10.0.0.26:22-10.0.0.1:47158.service - OpenSSH per-connection server daemon (10.0.0.1:47158).
Aug 5 22:29:27.255715 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 47158 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:27.257666 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:27.262821 systemd-logind[1445]: New session 26 of user core.
Aug 5 22:29:27.277011 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:29:27.404466 sshd[5400]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:27.409541 systemd[1]: sshd@26-10.0.0.26:22-10.0.0.1:47158.service: Deactivated successfully.
Aug 5 22:29:27.412333 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:29:27.413232 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:29:27.414347 systemd-logind[1445]: Removed session 26.
Aug 5 22:29:32.416260 systemd[1]: Started sshd@27-10.0.0.26:22-10.0.0.1:53728.service - OpenSSH per-connection server daemon (10.0.0.1:53728).
Aug 5 22:29:32.512906 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:32.514965 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:32.520247 systemd-logind[1445]: New session 27 of user core.
Aug 5 22:29:32.532854 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 22:29:32.678036 sshd[5440]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:32.683998 systemd[1]: sshd@27-10.0.0.26:22-10.0.0.1:53728.service: Deactivated successfully.
Aug 5 22:29:32.686946 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 22:29:32.687775 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit.
Aug 5 22:29:32.688879 systemd-logind[1445]: Removed session 27.
Aug 5 22:29:37.694092 systemd[1]: Started sshd@28-10.0.0.26:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736).
Aug 5 22:29:37.743802 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:trmmO/f8jH66MBVsEkMen/GIeN/rF8ZIiIhZ9EnhNYI
Aug 5 22:29:37.745860 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:29:37.751229 systemd-logind[1445]: New session 28 of user core.
Aug 5 22:29:37.761986 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 5 22:29:37.886914 sshd[5459]: pam_unix(sshd:session): session closed for user core
Aug 5 22:29:37.894389 systemd[1]: sshd@28-10.0.0.26:22-10.0.0.1:53736.service: Deactivated successfully.
Aug 5 22:29:37.896806 systemd[1]: session-28.scope: Deactivated successfully.
Aug 5 22:29:37.897575 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Aug 5 22:29:37.898574 systemd-logind[1445]: Removed session 28.