Oct 9 07:22:01.936645 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:22:01.936680 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:22:01.936694 kernel: BIOS-provided physical RAM map:
Oct 9 07:22:01.936703 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 9 07:22:01.936711 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 9 07:22:01.936719 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 9 07:22:01.936734 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 9 07:22:01.936747 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 9 07:22:01.936762 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 9 07:22:01.936777 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 9 07:22:01.936806 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 9 07:22:01.936826 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Oct 9 07:22:01.936846 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Oct 9 07:22:01.936865 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Oct 9 07:22:01.936888 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 9 07:22:01.936917 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 9 07:22:01.936927 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 9 07:22:01.936936 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 9 07:22:01.936945 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 9 07:22:01.936954 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 07:22:01.936963 kernel: NX (Execute Disable) protection: active
Oct 9 07:22:01.936972 kernel: APIC: Static calls initialized
Oct 9 07:22:01.936981 kernel: efi: EFI v2.7 by EDK II
Oct 9 07:22:01.936991 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Oct 9 07:22:01.937000 kernel: SMBIOS 2.8 present.
Oct 9 07:22:01.937010 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 9 07:22:01.937023 kernel: Hypervisor detected: KVM
Oct 9 07:22:01.937032 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:22:01.937058 kernel: kvm-clock: using sched offset of 4910520421 cycles
Oct 9 07:22:01.937068 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:22:01.937078 kernel: tsc: Detected 2794.748 MHz processor
Oct 9 07:22:01.937089 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:22:01.937099 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:22:01.937108 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 9 07:22:01.937118 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 9 07:22:01.937128 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:22:01.937143 kernel: Using GB pages for direct mapping
Oct 9 07:22:01.937164 kernel: Secure boot disabled
Oct 9 07:22:01.937183 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:22:01.937195 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 9 07:22:01.937215 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 07:22:01.937225 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937236 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937249 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 9 07:22:01.937260 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937277 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937288 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937298 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:22:01.937308 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 9 07:22:01.937318 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 9 07:22:01.937333 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Oct 9 07:22:01.937342 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 9 07:22:01.937352 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 9 07:22:01.937363 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 9 07:22:01.937373 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 9 07:22:01.937383 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 9 07:22:01.937393 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 9 07:22:01.937407 kernel: No NUMA configuration found
Oct 9 07:22:01.937417 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 9 07:22:01.937431 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 9 07:22:01.937442 kernel: Zone ranges:
Oct 9 07:22:01.937452 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:22:01.937462 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 9 07:22:01.937472 kernel: Normal empty
Oct 9 07:22:01.937482 kernel: Movable zone start for each node
Oct 9 07:22:01.937492 kernel: Early memory node ranges
Oct 9 07:22:01.937502 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 9 07:22:01.937512 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 9 07:22:01.937522 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 9 07:22:01.937536 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 9 07:22:01.937546 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 9 07:22:01.937556 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 9 07:22:01.937566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 9 07:22:01.937580 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:22:01.937590 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 9 07:22:01.937600 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 9 07:22:01.937610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:22:01.937620 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 9 07:22:01.937634 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 9 07:22:01.937644 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 9 07:22:01.937654 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:22:01.937665 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:22:01.937675 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:22:01.937685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:22:01.937695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:22:01.937705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:22:01.937715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:22:01.937729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:22:01.937739 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:22:01.937749 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:22:01.937759 kernel: TSC deadline timer available
Oct 9 07:22:01.937770 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 9 07:22:01.937780 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:22:01.937790 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 9 07:22:01.937800 kernel: kvm-guest: setup PV sched yield
Oct 9 07:22:01.937810 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 9 07:22:01.937824 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:22:01.937834 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:22:01.937844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 9 07:22:01.937854 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 9 07:22:01.937865 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 9 07:22:01.937875 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 9 07:22:01.937885 kernel: kvm-guest: PV spinlocks enabled
Oct 9 07:22:01.937895 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 9 07:22:01.937910 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:22:01.937925 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:22:01.937936 kernel: random: crng init done
Oct 9 07:22:01.937945 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 07:22:01.937956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:22:01.937966 kernel: Fallback order for Node 0: 0
Oct 9 07:22:01.937976 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 9 07:22:01.937986 kernel: Policy zone: DMA32
Oct 9 07:22:01.937996 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:22:01.938010 kernel: Memory: 2389472K/2567000K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 177268K reserved, 0K cma-reserved)
Oct 9 07:22:01.938021 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 07:22:01.938031 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:22:01.938057 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:22:01.938067 kernel: Dynamic Preempt: voluntary
Oct 9 07:22:01.938089 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:22:01.938104 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:22:01.938115 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 07:22:01.938125 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:22:01.938136 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:22:01.938147 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:22:01.938157 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:22:01.938172 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 07:22:01.938183 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 9 07:22:01.938193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:22:01.938204 kernel: Console: colour dummy device 80x25
Oct 9 07:22:01.938219 kernel: printk: console [ttyS0] enabled
Oct 9 07:22:01.938233 kernel: ACPI: Core revision 20230628
Oct 9 07:22:01.938244 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:22:01.938255 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:22:01.938266 kernel: x2apic enabled
Oct 9 07:22:01.938285 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:22:01.938296 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 9 07:22:01.938307 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 9 07:22:01.938318 kernel: kvm-guest: setup PV IPIs
Oct 9 07:22:01.938328 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:22:01.938343 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 07:22:01.938354 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 9 07:22:01.938365 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 07:22:01.938375 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 07:22:01.938386 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 07:22:01.938397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:22:01.938407 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:22:01.938418 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:22:01.938429 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:22:01.938444 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 07:22:01.938455 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 07:22:01.938470 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:22:01.938481 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:22:01.938491 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 07:22:01.938503 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 07:22:01.938514 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 07:22:01.938537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:22:01.938553 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:22:01.938564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:22:01.938574 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:22:01.938586 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 07:22:01.938597 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:22:01.938607 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:22:01.938618 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:22:01.938629 kernel: SELinux: Initializing.
Oct 9 07:22:01.938639 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:22:01.938654 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:22:01.938666 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 07:22:01.938676 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:22:01.938687 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:22:01.938698 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:22:01.938708 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 07:22:01.938719 kernel: ... version:                0
Oct 9 07:22:01.938729 kernel: ... bit width:              48
Oct 9 07:22:01.938740 kernel: ... generic registers:      6
Oct 9 07:22:01.938755 kernel: ... value mask:             0000ffffffffffff
Oct 9 07:22:01.938766 kernel: ... max period:             00007fffffffffff
Oct 9 07:22:01.938778 kernel: ... fixed-purpose events:   0
Oct 9 07:22:01.938790 kernel: ... event mask:             000000000000003f
Oct 9 07:22:01.938803 kernel: signal: max sigframe size: 1776
Oct 9 07:22:01.938813 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:22:01.938825 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:22:01.938835 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:22:01.938846 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:22:01.938860 kernel: .... node #0, CPUs: #1 #2 #3
Oct 9 07:22:01.938871 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 07:22:01.938882 kernel: smpboot: Max logical packages: 1
Oct 9 07:22:01.938892 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 9 07:22:01.938903 kernel: devtmpfs: initialized
Oct 9 07:22:01.938914 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:22:01.938925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 9 07:22:01.938935 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 9 07:22:01.938946 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 9 07:22:01.938961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 9 07:22:01.938972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 9 07:22:01.938983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:22:01.938994 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 07:22:01.939004 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:22:01.939015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:22:01.939026 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:22:01.939051 kernel: audit: type=2000 audit(1728458521.426:1): state=initialized audit_enabled=0 res=1
Oct 9 07:22:01.939062 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:22:01.939096 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:22:01.939125 kernel: cpuidle: using governor menu
Oct 9 07:22:01.939136 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:22:01.939147 kernel: dca service started, version 1.12.1
Oct 9 07:22:01.939158 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 07:22:01.939168 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 9 07:22:01.939179 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:22:01.939191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:22:01.939206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 07:22:01.939223 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 07:22:01.939234 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:22:01.939244 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:22:01.939255 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:22:01.939265 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:22:01.939285 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:22:01.939296 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:22:01.939307 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:22:01.939317 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:22:01.939332 kernel: ACPI: Interpreter enabled
Oct 9 07:22:01.939343 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 07:22:01.939353 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:22:01.939364 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:22:01.939374 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:22:01.939385 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 07:22:01.939396 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:22:01.939669 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:22:01.939853 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 07:22:01.940015 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 07:22:01.940031 kernel: PCI host bridge to bus 0000:00
Oct 9 07:22:01.940321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:22:01.940491 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:22:01.940641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:22:01.940791 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 9 07:22:01.940952 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 07:22:01.941124 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 9 07:22:01.941287 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:22:01.941495 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 07:22:01.941701 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 9 07:22:01.941878 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 9 07:22:01.942083 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 9 07:22:01.942259 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 9 07:22:01.942445 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 9 07:22:01.942619 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:22:01.942819 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 07:22:01.942981 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 9 07:22:01.943158 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 9 07:22:01.943330 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 9 07:22:01.943510 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:22:01.943676 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 9 07:22:01.943850 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 9 07:22:01.944026 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 9 07:22:01.944252 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:22:01.944440 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 9 07:22:01.944623 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 9 07:22:01.944789 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 9 07:22:01.944955 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 9 07:22:01.945171 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 07:22:01.945346 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 07:22:01.945531 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 07:22:01.945696 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 9 07:22:01.945865 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 9 07:22:01.946131 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 07:22:01.946308 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 9 07:22:01.946325 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:22:01.946336 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:22:01.946347 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:22:01.946357 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:22:01.946374 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 07:22:01.946385 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 07:22:01.946395 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 07:22:01.946406 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 07:22:01.946416 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 07:22:01.946427 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 07:22:01.946438 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 07:22:01.946448 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 07:22:01.946459 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 07:22:01.946474 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 07:22:01.946484 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 07:22:01.946495 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 07:22:01.946506 kernel: iommu: Default domain type: Translated
Oct 9 07:22:01.946517 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:22:01.946527 kernel: efivars: Registered efivars operations
Oct 9 07:22:01.946537 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:22:01.946548 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:22:01.946559 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 9 07:22:01.946573 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 9 07:22:01.946583 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 9 07:22:01.946594 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 9 07:22:01.946755 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 07:22:01.946912 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 07:22:01.947114 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:22:01.947133 kernel: vgaarb: loaded
Oct 9 07:22:01.947145 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:22:01.947156 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:22:01.947173 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:22:01.947184 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:22:01.947194 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:22:01.947205 kernel: pnp: PnP ACPI init
Oct 9 07:22:01.947409 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 07:22:01.947427 kernel: pnp: PnP ACPI: found 6 devices
Oct 9 07:22:01.947438 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:22:01.947449 kernel: NET: Registered PF_INET protocol family
Oct 9 07:22:01.947465 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 07:22:01.947476 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 07:22:01.947487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:22:01.947498 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:22:01.947509 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 07:22:01.947520 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 07:22:01.947530 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:22:01.947541 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:22:01.947551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:22:01.947565 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:22:01.947737 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 9 07:22:01.947922 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 9 07:22:01.948166 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:22:01.948334 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:22:01.948493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:22:01.948650 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 9 07:22:01.948809 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 07:22:01.948968 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 9 07:22:01.948984 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:22:01.948996 kernel: Initialise system trusted keyrings
Oct 9 07:22:01.949007 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 07:22:01.949018 kernel: Key type asymmetric registered
Oct 9 07:22:01.949028 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:22:01.949056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:22:01.949068 kernel: io scheduler mq-deadline registered
Oct 9 07:22:01.949078 kernel: io scheduler kyber registered
Oct 9 07:22:01.949095 kernel: io scheduler bfq registered
Oct 9 07:22:01.949106 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:22:01.949117 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 07:22:01.949128 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 07:22:01.949139 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 9 07:22:01.949150 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:22:01.949161 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:22:01.949172 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:22:01.949183 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:22:01.949197 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:22:01.949208 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:22:01.949401 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 07:22:01.949559 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 07:22:01.949712 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:22:01 UTC (1728458521)
Oct 9 07:22:01.949865 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 9 07:22:01.949880 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 07:22:01.949896 kernel: efifb: probing for efifb
Oct 9 07:22:01.949907 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Oct 9 07:22:01.949918 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Oct 9 07:22:01.949929 kernel: efifb: scrolling: redraw
Oct 9 07:22:01.949940 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Oct 9 07:22:01.949951 kernel: Console: switching to colour frame buffer device 100x37
Oct 9 07:22:01.949962 kernel: fb0: EFI VGA frame buffer device
Oct 9 07:22:01.949994 kernel: pstore: Using crash dump compression: deflate
Oct 9 07:22:01.950009 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 9 07:22:01.950023 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:22:01.950049 kernel: Segment Routing with IPv6
Oct 9 07:22:01.950061 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:22:01.950073 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:22:01.950084 kernel: Key type dns_resolver registered
Oct 9 07:22:01.950095 kernel: IPI shorthand broadcast: enabled
Oct 9 07:22:01.950106 kernel: sched_clock: Marking stable (923001815, 114667124)->(1101587410, -63918471)
Oct 9 07:22:01.950117 kernel: registered taskstats version 1
Oct 9 07:22:01.950128 kernel: Loading compiled-in X.509 certificates
Oct 9 07:22:01.950139 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:22:01.950155 kernel: Key type .fscrypt registered
Oct 9 07:22:01.950166 kernel: Key type fscrypt-provisioning registered
Oct 9 07:22:01.950178 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:22:01.950189 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:22:01.950200 kernel: ima: No architecture policies found
Oct 9 07:22:01.950211 kernel: clk: Disabling unused clocks
Oct 9 07:22:01.950222 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:22:01.950234 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:22:01.950249 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:22:01.950260 kernel: Run /init as init process
Oct 9 07:22:01.950279 kernel:   with arguments:
Oct 9 07:22:01.950290 kernel:     /init
Oct 9 07:22:01.950300 kernel:   with environment:
Oct 9 07:22:01.950311 kernel:     HOME=/
Oct 9 07:22:01.950322 kernel:     TERM=linux
Oct 9 07:22:01.950334 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:22:01.950348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:22:01.950366 systemd[1]: Detected virtualization kvm.
Oct 9 07:22:01.950378 systemd[1]: Detected architecture x86-64.
Oct 9 07:22:01.950389 systemd[1]: Running in initrd.
Oct 9 07:22:01.950401 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:22:01.950421 systemd[1]: Hostname set to <localhost>.
Oct 9 07:22:01.950432 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:22:01.950443 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:22:01.950454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:22:01.950465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:22:01.950477 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:22:01.950488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:22:01.950499 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:22:01.950513 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:22:01.950526 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:22:01.950538 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:22:01.950549 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:22:01.950560 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:22:01.950570 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:22:01.950581 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:22:01.950596 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:22:01.950608 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:22:01.950620 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:22:01.950632 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:22:01.950644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:22:01.950658 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:22:01.950669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:22:01.950678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:22:01.950686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:22:01.950698 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:22:01.950707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:22:01.950715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:22:01.950723 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:22:01.950732 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:22:01.950740 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:22:01.950749 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:22:01.950757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:22:01.950768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:22:01.950776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:22:01.950784 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:22:01.950815 systemd-journald[193]: Collecting audit messages is disabled.
Oct 9 07:22:01.950837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:22:01.950846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:01.950855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:22:01.950864 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:22:01.950875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:22:01.950883 systemd-journald[193]: Journal started
Oct 9 07:22:01.950902 systemd-journald[193]: Runtime Journal (/run/log/journal/089995407a2249f4b87f73be03e4b20a) is 6.0M, max 48.3M, 42.3M free.
Oct 9 07:22:01.937266 systemd-modules-load[194]: Inserted module 'overlay'
Oct 9 07:22:01.952500 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:22:01.959117 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:22:01.969055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:22:01.971880 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 9 07:22:01.973087 kernel: Bridge firewalling registered
Oct 9 07:22:01.973468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:22:01.976485 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:22:01.978881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:22:01.981592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:22:01.999166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:22:02.000299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:22:02.013353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:22:02.016807 dracut-cmdline[223]: dracut-dracut-053
Oct 9 07:22:02.023488 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:22:02.023263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:22:02.062123 systemd-resolved[237]: Positive Trust Anchors:
Oct 9 07:22:02.062140 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:22:02.062171 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:22:02.064716 systemd-resolved[237]: Defaulting to hostname 'linux'.
Oct 9 07:22:02.065937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:22:02.070908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:22:02.129073 kernel: SCSI subsystem initialized
Oct 9 07:22:02.140068 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:22:02.152064 kernel: iscsi: registered transport (tcp)
Oct 9 07:22:02.177187 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:22:02.177212 kernel: QLogic iSCSI HBA Driver
Oct 9 07:22:02.230751 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:22:02.242261 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:22:02.269783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:22:02.269827 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:22:02.269847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:22:02.315057 kernel: raid6: avx2x4 gen() 30626 MB/s
Oct 9 07:22:02.332054 kernel: raid6: avx2x2 gen() 31400 MB/s
Oct 9 07:22:02.349133 kernel: raid6: avx2x1 gen() 25968 MB/s
Oct 9 07:22:02.349153 kernel: raid6: using algorithm avx2x2 gen() 31400 MB/s
Oct 9 07:22:02.367140 kernel: raid6: .... xor() 19924 MB/s, rmw enabled
Oct 9 07:22:02.367168 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:22:02.392063 kernel: xor: automatically using best checksumming function avx
Oct 9 07:22:02.569064 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:22:02.582146 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:22:02.602209 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:22:02.614487 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Oct 9 07:22:02.619088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:22:02.632204 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:22:02.645478 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Oct 9 07:22:02.677481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:22:02.687228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:22:02.754875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:22:02.768182 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:22:02.783072 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:22:02.786383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:22:02.789678 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:22:02.792363 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:22:02.798060 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 07:22:02.799918 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 07:22:02.802376 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:22:02.802399 kernel: GPT:9289727 != 19775487
Oct 9 07:22:02.802411 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:22:02.802426 kernel: GPT:9289727 != 19775487
Oct 9 07:22:02.802436 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:22:02.802446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:22:02.802221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:22:02.812593 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:22:02.816325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:22:02.831054 kernel: libata version 3.00 loaded.
Oct 9 07:22:02.844061 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:22:02.844089 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:22:02.845062 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 07:22:02.845507 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:22:02.861969 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 07:22:02.861991 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 07:22:02.862418 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 07:22:02.862565 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Oct 9 07:22:02.862576 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463)
Oct 9 07:22:02.862598 kernel: scsi host0: ahci
Oct 9 07:22:02.862755 kernel: scsi host1: ahci
Oct 9 07:22:02.862901 kernel: scsi host2: ahci
Oct 9 07:22:02.863978 kernel: scsi host3: ahci
Oct 9 07:22:02.864204 kernel: scsi host4: ahci
Oct 9 07:22:02.864370 kernel: scsi host5: ahci
Oct 9 07:22:02.864530 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 9 07:22:02.845733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:22:02.872499 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 9 07:22:02.872516 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 9 07:22:02.872526 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 9 07:22:02.872539 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 9 07:22:02.872550 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 9 07:22:02.850498 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:22:02.854276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:22:02.854443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:02.857719 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:22:02.865898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:22:02.884286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:22:02.889056 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:22:02.892339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:02.901757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:22:02.905778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:22:02.906051 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:22:02.921154 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:22:02.922254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:22:02.933219 disk-uuid[557]: Primary Header is updated.
Oct 9 07:22:02.933219 disk-uuid[557]: Secondary Entries is updated.
Oct 9 07:22:02.933219 disk-uuid[557]: Secondary Header is updated.
Oct 9 07:22:02.937202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:22:02.942093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:22:02.944715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:22:03.174946 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 07:22:03.175003 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 07:22:03.175015 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 07:22:03.175027 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 07:22:03.176056 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 07:22:03.177055 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 07:22:03.178195 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 07:22:03.178209 kernel: ata3.00: applying bridge limits
Oct 9 07:22:03.179049 kernel: ata3.00: configured for UDMA/100
Oct 9 07:22:03.180059 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 07:22:03.233641 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 07:22:03.233945 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 07:22:03.252073 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 07:22:03.944942 disk-uuid[560]: The operation has completed successfully.
Oct 9 07:22:03.946470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:22:03.975964 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:22:03.976131 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:22:03.997266 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:22:04.000912 sh[593]: Success
Oct 9 07:22:04.015068 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 07:22:04.049295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:22:04.063621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:22:04.068380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:22:04.078067 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:22:04.078095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:22:04.079943 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:22:04.079957 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:22:04.080689 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:22:04.085455 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:22:04.088610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:22:04.099329 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:22:04.102128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:22:04.111315 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:22:04.111346 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:22:04.111357 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:22:04.115065 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:22:04.125759 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:22:04.127646 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:22:04.137613 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:22:04.146232 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:22:04.205613 ignition[689]: Ignition 2.18.0
Oct 9 07:22:04.205623 ignition[689]: Stage: fetch-offline
Oct 9 07:22:04.205672 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:04.205684 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:04.205893 ignition[689]: parsed url from cmdline: ""
Oct 9 07:22:04.205897 ignition[689]: no config URL provided
Oct 9 07:22:04.205903 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:22:04.205913 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:22:04.205957 ignition[689]: op(1): [started] loading QEMU firmware config module
Oct 9 07:22:04.205963 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 07:22:04.215381 ignition[689]: op(1): [finished] loading QEMU firmware config module
Oct 9 07:22:04.225578 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:22:04.239234 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:22:04.258088 ignition[689]: parsing config with SHA512: 5613ed44ec1d435139df7b646ce1ef2254999d579664a68e1bd5061a5ae3922a3d89fa1ef3f11e9b46545b7b9126691ef9cacb662e3dc85894a2afd85ffb62f2
Oct 9 07:22:04.263369 systemd-networkd[781]: lo: Link UP
Oct 9 07:22:04.263379 systemd-networkd[781]: lo: Gained carrier
Oct 9 07:22:04.264603 unknown[689]: fetched base config from "system"
Oct 9 07:22:04.265653 ignition[689]: fetch-offline: fetch-offline passed
Oct 9 07:22:04.264617 unknown[689]: fetched user config from "qemu"
Oct 9 07:22:04.265750 ignition[689]: Ignition finished successfully
Oct 9 07:22:04.266157 systemd-networkd[781]: Enumeration completed
Oct 9 07:22:04.266294 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:22:04.266708 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:22:04.266712 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:22:04.268343 systemd-networkd[781]: eth0: Link UP
Oct 9 07:22:04.268347 systemd-networkd[781]: eth0: Gained carrier
Oct 9 07:22:04.268354 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:22:04.268522 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:22:04.271020 systemd[1]: Reached target network.target - Network.
Oct 9 07:22:04.272221 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 07:22:04.279092 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:22:04.279186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:22:04.294441 ignition[784]: Ignition 2.18.0
Oct 9 07:22:04.294452 ignition[784]: Stage: kargs
Oct 9 07:22:04.294622 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:04.294635 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:04.295583 ignition[784]: kargs: kargs passed
Oct 9 07:22:04.295628 ignition[784]: Ignition finished successfully
Oct 9 07:22:04.299422 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:22:04.312169 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:22:04.329025 ignition[794]: Ignition 2.18.0
Oct 9 07:22:04.329051 ignition[794]: Stage: disks
Oct 9 07:22:04.329222 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:04.329235 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:04.330117 ignition[794]: disks: disks passed
Oct 9 07:22:04.332663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:22:04.330164 ignition[794]: Ignition finished successfully
Oct 9 07:22:04.334177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:22:04.336021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:22:04.337490 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:22:04.339722 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:22:04.340897 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:22:04.353182 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:22:04.364637 systemd-resolved[237]: Detected conflict on linux IN A 10.0.0.95
Oct 9 07:22:04.364654 systemd-resolved[237]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Oct 9 07:22:04.367522 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:22:04.373478 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:22:04.392146 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:22:04.493003 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:22:04.495811 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:22:04.494543 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:22:04.511130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:22:04.512963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:22:04.514142 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 07:22:04.514186 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:22:04.524969 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813)
Oct 9 07:22:04.524996 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:22:04.525008 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:22:04.525019 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:22:04.514218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:22:04.520742 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:22:04.525860 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:22:04.530324 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:22:04.531920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:22:04.565343 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:22:04.570885 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:22:04.576019 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:22:04.581076 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:22:04.668022 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:22:04.685137 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:22:04.687764 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:22:04.699057 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:22:04.715825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:22:04.725161 ignition[929]: INFO : Ignition 2.18.0
Oct 9 07:22:04.725161 ignition[929]: INFO : Stage: mount
Oct 9 07:22:04.726785 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:04.726785 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:04.726785 ignition[929]: INFO : mount: mount passed
Oct 9 07:22:04.726785 ignition[929]: INFO : Ignition finished successfully
Oct 9 07:22:04.732537 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:22:04.739255 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:22:05.078449 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:22:05.091208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:22:05.098648 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Oct 9 07:22:05.098675 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:22:05.098687 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:22:05.099529 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:22:05.103057 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:22:05.104195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:22:05.161566 ignition[958]: INFO : Ignition 2.18.0
Oct 9 07:22:05.161566 ignition[958]: INFO : Stage: files
Oct 9 07:22:05.163485 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:05.163485 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:05.163485 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 07:22:05.163485 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 07:22:05.163485 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 07:22:05.169614 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 07:22:05.170965 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 07:22:05.172706 unknown[958]: wrote ssh authorized keys file for user: core
Oct 9 07:22:05.173795 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 07:22:05.176398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 07:22:05.178204 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 07:22:05.179886 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:22:05.181743 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 07:22:05.224546 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 07:22:05.339107 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:22:05.339107 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:22:05.344377 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:22:05.367070 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 07:22:05.797728 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 07:22:06.256232 systemd-networkd[781]: eth0: Gained IPv6LL
Oct 9 07:22:06.681545 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:22:06.681545 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 9 07:22:06.686112 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:22:06.717610 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:22:06.722996 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:22:06.724819 ignition[958]: INFO : files: files passed
Oct 9 07:22:06.724819 ignition[958]: INFO : Ignition finished successfully
Oct 9 07:22:06.726585 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:22:06.739180 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:22:06.741180 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:22:06.743396 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:22:06.743513 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:22:06.754093 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 07:22:06.757087 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:22:06.758989 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:22:06.761740 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:22:06.760487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:22:06.761954 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:22:06.773215 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:22:06.801125 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:22:06.801268 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:22:06.803578 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:22:06.803980 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:22:06.804405 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:22:06.810387 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:22:06.832072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:22:06.846211 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:22:06.856903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:22:06.858180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:22:06.860342 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:22:06.862315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:22:06.862436 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:22:06.864547 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:22:06.871131 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:22:06.873109 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:22:06.875088 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:22:06.877061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:22:06.880647 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:22:06.882743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:22:06.884985 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:22:06.886935 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:22:06.889090 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:22:06.890801 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:22:06.890924 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:22:06.893027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:22:06.894604 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:22:06.896644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:22:06.896781 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:22:06.898826 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:22:06.898936 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:22:06.901270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:22:06.901384 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:22:06.903205 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:22:06.915491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:22:06.919091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:22:06.921095 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:22:06.923060 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:22:06.924789 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:22:06.924887 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:22:06.926746 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:22:06.926837 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:22:06.929173 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:22:06.929290 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:22:06.931187 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:22:06.931300 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:22:06.941222 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:22:06.943961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:22:06.945274 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:22:06.945442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:22:06.947885 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:22:06.948073 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:22:06.954108 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:22:06.954235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:22:06.958891 ignition[1014]: INFO : Ignition 2.18.0
Oct 9 07:22:06.958891 ignition[1014]: INFO : Stage: umount
Oct 9 07:22:06.961055 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:22:06.961055 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:22:06.961055 ignition[1014]: INFO : umount: umount passed
Oct 9 07:22:06.961055 ignition[1014]: INFO : Ignition finished successfully
Oct 9 07:22:06.962953 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:22:06.963104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:22:06.964642 systemd[1]: Stopped target network.target - Network.
Oct 9 07:22:06.966129 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:22:06.966198 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:22:06.968017 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:22:06.968081 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:22:06.969926 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:22:06.969975 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:22:06.971837 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:22:06.971886 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:22:06.973875 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:22:06.975848 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:22:06.979499 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:22:06.981082 systemd-networkd[781]: eth0: DHCPv6 lease lost
Oct 9 07:22:06.990461 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:22:06.990668 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:22:06.993090 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:22:06.993286 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:22:06.997260 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:22:06.997346 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:22:07.006212 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:22:07.007142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:22:07.007211 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:22:07.009333 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:22:07.009385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:22:07.011532 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:22:07.011583 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:22:07.013975 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:22:07.014027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:22:07.015422 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:22:07.031564 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:22:07.031706 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:22:07.044001 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:22:07.044221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:22:07.044971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:22:07.045022 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:22:07.047984 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:22:07.048048 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:22:07.050347 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:22:07.050404 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:22:07.053187 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:22:07.053240 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:22:07.054914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:22:07.054968 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:22:07.066201 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:22:07.067297 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:22:07.067358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:22:07.069758 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 07:22:07.069812 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:22:07.072432 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:22:07.072483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:22:07.075367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:22:07.075418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:07.078401 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:22:07.078517 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:22:07.285600 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:22:07.285752 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:22:07.287842 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:22:07.289481 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:22:07.289536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:22:07.311245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:22:07.319519 systemd[1]: Switching root.
Oct 9 07:22:07.347554 systemd-journald[193]: Journal stopped
Oct 9 07:22:08.563371 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:22:08.563445 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:22:08.563459 kernel: SELinux: policy capability open_perms=1
Oct 9 07:22:08.563476 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:22:08.563487 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:22:08.563499 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:22:08.563510 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:22:08.563522 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:22:08.563538 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:22:08.563555 kernel: audit: type=1403 audit(1728458527.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:22:08.563573 systemd[1]: Successfully loaded SELinux policy in 39.446ms.
Oct 9 07:22:08.563592 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.859ms.
Oct 9 07:22:08.563606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:22:08.563619 systemd[1]: Detected virtualization kvm.
Oct 9 07:22:08.563631 systemd[1]: Detected architecture x86-64.
Oct 9 07:22:08.563643 systemd[1]: Detected first boot.
Oct 9 07:22:08.563656 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:22:08.563668 zram_generator::config[1075]: No configuration found.
Oct 9 07:22:08.563686 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:22:08.563699 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:22:08.563713 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:22:08.563727 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:22:08.563743 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:22:08.563760 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:22:08.563772 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:22:08.563784 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:22:08.563802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:22:08.563819 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:22:08.563831 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:22:08.563843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:22:08.563856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:22:08.563868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:22:08.563881 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:22:08.563893 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:22:08.563911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:22:08.563924 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:22:08.563936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:22:08.563948 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:22:08.563960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:22:08.563973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:22:08.563985 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:22:08.563999 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:22:08.564012 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:22:08.564029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:22:08.564055 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:22:08.564067 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:22:08.564079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:22:08.564092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:22:08.564111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:22:08.564123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:22:08.564135 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:22:08.564148 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:22:08.564167 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:22:08.564179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:08.564192 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:22:08.564204 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:22:08.564216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:22:08.564228 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:22:08.564240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:22:08.564252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:22:08.564270 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:22:08.564282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:22:08.564296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:22:08.564308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:22:08.564320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:22:08.564332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:22:08.564344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:22:08.564357 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 9 07:22:08.564369 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 9 07:22:08.564386 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:22:08.564399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:22:08.564410 kernel: fuse: init (API version 7.39)
Oct 9 07:22:08.564422 kernel: loop: module loaded
Oct 9 07:22:08.564434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:22:08.564447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:22:08.564459 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:22:08.564472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:08.564489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:22:08.564520 systemd-journald[1158]: Collecting audit messages is disabled.
Oct 9 07:22:08.564543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:22:08.564555 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:22:08.564567 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:22:08.564580 systemd-journald[1158]: Journal started
Oct 9 07:22:08.564609 systemd-journald[1158]: Runtime Journal (/run/log/journal/089995407a2249f4b87f73be03e4b20a) is 6.0M, max 48.3M, 42.3M free.
Oct 9 07:22:08.568085 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:22:08.570978 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:22:08.572943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:22:08.574719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:22:08.576676 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:22:08.576986 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:22:08.578888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:22:08.579215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:22:08.581107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:22:08.581401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:22:08.583350 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:22:08.583637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:22:08.585417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:22:08.585696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:22:08.587600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:22:08.589535 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:22:08.591589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:22:08.647611 kernel: ACPI: bus type drm_connector registered
Oct 9 07:22:08.649219 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:22:08.649727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:22:08.659751 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:22:08.664485 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:22:08.674197 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:22:08.676880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:22:08.678316 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:22:08.681849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:22:08.685238 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:22:08.685554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:22:08.687185 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:22:08.689308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:22:08.692264 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:22:08.695768 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:22:08.698891 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:22:08.706122 systemd-journald[1158]: Time spent on flushing to /var/log/journal/089995407a2249f4b87f73be03e4b20a is 21.062ms for 983 entries.
Oct 9 07:22:08.706122 systemd-journald[1158]: System Journal (/var/log/journal/089995407a2249f4b87f73be03e4b20a) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:22:08.739639 systemd-journald[1158]: Received client request to flush runtime journal.
Oct 9 07:22:08.707399 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:22:08.715471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:22:08.717449 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:22:08.737915 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 9 07:22:08.737929 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 9 07:22:08.742491 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:22:08.745079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:22:08.750615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:22:08.761306 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:22:08.763156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:22:08.767164 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:22:08.782919 udevadm[1233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:22:08.790333 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:22:08.798186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:22:08.817124 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Oct 9 07:22:08.817147 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Oct 9 07:22:08.823444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:22:09.426016 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:22:09.440191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:22:09.465059 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Oct 9 07:22:09.480473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:22:09.494209 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:22:09.510239 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:22:09.537063 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1252)
Oct 9 07:22:09.593082 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1256)
Oct 9 07:22:09.623435 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 9 07:22:09.644769 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:22:09.662057 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 07:22:09.668079 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:22:09.670885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:22:09.685107 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 07:22:09.710076 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 9 07:22:09.710346 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 07:22:09.715064 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 07:22:09.715276 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 07:22:09.715443 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:22:09.724258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:22:09.727568 systemd-networkd[1251]: lo: Link UP
Oct 9 07:22:09.727580 systemd-networkd[1251]: lo: Gained carrier
Oct 9 07:22:09.729239 systemd-networkd[1251]: Enumeration completed
Oct 9 07:22:09.729352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:22:09.729653 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:22:09.729657 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:22:09.730911 systemd-networkd[1251]: eth0: Link UP
Oct 9 07:22:09.730916 systemd-networkd[1251]: eth0: Gained carrier
Oct 9 07:22:09.730927 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:22:09.733434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:22:09.746101 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:22:09.753540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:22:09.753911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:09.760343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:22:09.838352 kernel: kvm_amd: TSC scaling supported
Oct 9 07:22:09.838415 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 07:22:09.838456 kernel: kvm_amd: Nested Paging enabled
Oct 9 07:22:09.838469 kernel: kvm_amd: LBR virtualization supported
Oct 9 07:22:09.839559 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 07:22:09.839574 kernel: kvm_amd: Virtual GIF supported
Oct 9 07:22:09.857962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:22:09.864088 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:22:09.897579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:22:09.912178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:22:09.922621 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:22:09.955423 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:22:09.957073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:22:09.968329 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:22:09.974786 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:22:10.072572 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:22:10.074863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:22:10.079142 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:22:10.079189 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:22:10.080533 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:22:10.083402 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:22:10.100346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:22:10.103450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:22:10.104811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:22:10.105882 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:22:10.108464 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:22:10.111956 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:22:10.114833 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:22:10.126075 kernel: loop0: detected capacity change from 0 to 211296
Oct 9 07:22:10.128064 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:22:10.136285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:22:10.146435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:22:10.147833 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:22:10.151339 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:22:10.181090 kernel: loop1: detected capacity change from 0 to 80568
Oct 9 07:22:10.260926 kernel: loop2: detected capacity change from 0 to 139904
Oct 9 07:22:10.377073 kernel: loop3: detected capacity change from 0 to 211296
Oct 9 07:22:10.387075 kernel: loop4: detected capacity change from 0 to 80568
Oct 9 07:22:10.395108 kernel: loop5: detected capacity change from 0 to 139904
Oct 9 07:22:10.406646 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 07:22:10.407279 (sd-merge)[1324]: Merged extensions into '/usr'.
Oct 9 07:22:10.411569 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:22:10.411590 systemd[1]: Reloading...
Oct 9 07:22:10.525135 zram_generator::config[1350]: No configuration found.
Oct 9 07:22:10.571025 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:22:10.664833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:22:10.731708 systemd[1]: Reloading finished in 319 ms.
Oct 9 07:22:10.751966 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:22:10.753631 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:22:10.767192 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:22:10.769512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:22:10.774142 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:22:10.774158 systemd[1]: Reloading...
Oct 9 07:22:10.803700 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:22:10.804105 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:22:10.805180 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:22:10.805519 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Oct 9 07:22:10.805604 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Oct 9 07:22:10.812454 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:22:10.812474 systemd-tmpfiles[1399]: Skipping /boot
Oct 9 07:22:10.828767 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:22:10.828786 systemd-tmpfiles[1399]: Skipping /boot
Oct 9 07:22:10.830959 zram_generator::config[1425]: No configuration found.
Oct 9 07:22:10.992649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:22:11.055183 systemd-networkd[1251]: eth0: Gained IPv6LL
Oct 9 07:22:11.056870 systemd[1]: Reloading finished in 282 ms.
Oct 9 07:22:11.080102 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:22:11.096856 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:22:11.106004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:22:11.108743 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:22:11.111605 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:22:11.116910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:22:11.122757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:22:11.126320 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.126496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:22:11.131231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:22:11.142763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:22:11.147276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:22:11.148580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:22:11.148680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.150812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:22:11.151068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:22:11.153726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:22:11.153955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:22:11.158940 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:22:11.159273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:22:11.165311 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:22:11.173805 augenrules[1505]: No rules
Oct 9 07:22:11.190394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:22:11.194379 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:22:11.197366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.197694 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:22:11.202291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:22:11.206755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:22:11.212016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:22:11.216569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:22:11.220432 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:22:11.221645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.223105 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:22:11.226621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:22:11.226857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:22:11.228552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:22:11.228804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:22:11.231250 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:22:11.231531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:22:11.244276 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:22:11.246071 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:22:11.250937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.251206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:22:11.254842 systemd-resolved[1476]: Positive Trust Anchors:
Oct 9 07:22:11.254858 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:22:11.254890 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:22:11.255250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:22:11.258474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:22:11.259130 systemd-resolved[1476]: Defaulting to hostname 'linux'.
Oct 9 07:22:11.263422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:22:11.269275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:22:11.270757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:22:11.274289 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:22:11.275696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:22:11.275735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:22:11.276112 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:22:11.293765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:22:11.294151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:22:11.296085 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:22:11.296389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:22:11.298212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:22:11.298507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:22:11.300474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:22:11.300804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:22:11.307798 systemd[1]: Reached target network.target - Network.
Oct 9 07:22:11.309132 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:22:11.310550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:22:11.312169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:22:11.312284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:22:11.375060 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:22:12.241481 systemd-resolved[1476]: Clock change detected. Flushing caches.
Oct 9 07:22:12.241530 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 07:22:12.241574 systemd-timesyncd[1541]: Initial clock synchronization to Wed 2024-10-09 07:22:12.241378 UTC.
Oct 9 07:22:12.242361 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:22:12.243590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:22:12.244981 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:22:12.246260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:22:12.247575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:22:12.247604 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:22:12.248545 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:22:12.249833 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:22:12.251069 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:22:12.252325 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:22:12.254114 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:22:12.257217 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:22:12.259764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:22:12.268776 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:22:12.269897 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:22:12.270878 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:22:12.271998 systemd[1]: System is tainted: cgroupsv1
Oct 9 07:22:12.272036 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:22:12.272057 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:22:12.273388 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:22:12.275668 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 07:22:12.277868 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:22:12.282546 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:22:12.285693 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:22:12.287702 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:22:12.290875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:22:12.291332 jq[1554]: false
Oct 9 07:22:12.294622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:22:12.298622 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:22:12.303874 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:22:12.306914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:22:12.314787 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:22:12.319105 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:22:12.321358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:22:12.324329 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:22:12.330175 extend-filesystems[1557]: Found loop3
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found loop4
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found loop5
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found sr0
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda1
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda2
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda3
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found usr
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda4
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda6
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda7
Oct 9 07:22:12.331646 extend-filesystems[1557]: Found vda9
Oct 9 07:22:12.331646 extend-filesystems[1557]: Checking size of /dev/vda9
Oct 9 07:22:12.331803 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:22:12.332725 dbus-daemon[1553]: [system] SELinux support is enabled
Oct 9 07:22:12.342128 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:22:12.356644 update_engine[1577]: I1009 07:22:12.351784 1577 main.cc:92] Flatcar Update Engine starting
Oct 9 07:22:12.356644 update_engine[1577]: I1009 07:22:12.353710 1577 update_check_scheduler.cc:74] Next update check in 6m55s
Oct 9 07:22:12.357978 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:22:12.358763 jq[1580]: true
Oct 9 07:22:12.358310 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:22:12.359162 extend-filesystems[1557]: Resized partition /dev/vda9
Oct 9 07:22:12.360880 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:22:12.361182 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:22:12.368358 extend-filesystems[1596]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:22:12.368335 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:22:12.378787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1258)
Oct 9 07:22:12.378225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:22:12.378544 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:22:12.392695 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 07:22:12.414960 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:22:12.417488 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 9 07:22:12.419536 jq[1600]: true
Oct 9 07:22:12.417886 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 9 07:22:12.442589 tar[1598]: linux-amd64/helm
Oct 9 07:22:12.448633 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:22:12.450530 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:22:12.450094 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:22:12.450192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:22:12.450212 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:22:12.451557 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:22:12.451578 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:22:12.453481 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:22:12.464685 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:22:12.482915 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:22:12.485795 systemd-logind[1575]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:22:12.485827 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:22:12.486318 systemd-logind[1575]: New seat seat0.
Oct 9 07:22:12.499761 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:22:12.500947 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:22:12.509859 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:22:12.510189 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:22:12.516661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:22:12.612346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 07:22:12.633745 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 07:22:12.636374 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 07:22:12.637679 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 07:22:12.641643 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:22:12.759486 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 07:22:13.332229 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:22:13.332229 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 07:22:13.332229 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 07:22:13.338521 extend-filesystems[1557]: Resized filesystem in /dev/vda9
Oct 9 07:22:13.336556 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:22:13.339550 containerd[1602]: time="2024-10-09T07:22:13.332609492Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:22:13.336896 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:22:13.360549 containerd[1602]: time="2024-10-09T07:22:13.360474398Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:22:13.360549 containerd[1602]: time="2024-10-09T07:22:13.360533078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362192 containerd[1602]: time="2024-10-09T07:22:13.362133920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362192 containerd[1602]: time="2024-10-09T07:22:13.362182021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362612 containerd[1602]: time="2024-10-09T07:22:13.362581039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362612 containerd[1602]: time="2024-10-09T07:22:13.362605765Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:22:13.362729 containerd[1602]: time="2024-10-09T07:22:13.362710031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362809 containerd[1602]: time="2024-10-09T07:22:13.362789340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362831 containerd[1602]: time="2024-10-09T07:22:13.362807153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.362919 containerd[1602]: time="2024-10-09T07:22:13.362901961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.363197 containerd[1602]: time="2024-10-09T07:22:13.363169613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.363219 containerd[1602]: time="2024-10-09T07:22:13.363193988Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:22:13.363219 containerd[1602]: time="2024-10-09T07:22:13.363204638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:22:13.363404 containerd[1602]: time="2024-10-09T07:22:13.363378474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:22:13.363404 containerd[1602]: time="2024-10-09T07:22:13.363399544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:22:13.363510 containerd[1602]: time="2024-10-09T07:22:13.363484954Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:22:13.363510 containerd[1602]: time="2024-10-09T07:22:13.363503860Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:22:13.437158 tar[1598]: linux-amd64/LICENSE
Oct 9 07:22:13.437158 tar[1598]: linux-amd64/README.md
Oct 9 07:22:13.481097 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 07:22:13.707895 bash[1642]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:22:13.710204 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:22:13.712332 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 07:22:13.776886 containerd[1602]: time="2024-10-09T07:22:13.776823321Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:22:13.776886 containerd[1602]: time="2024-10-09T07:22:13.776870670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:22:13.776886 containerd[1602]: time="2024-10-09T07:22:13.776885618Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:22:13.776994 containerd[1602]: time="2024-10-09T07:22:13.776921135Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:22:13.776994 containerd[1602]: time="2024-10-09T07:22:13.776935091Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:22:13.776994 containerd[1602]: time="2024-10-09T07:22:13.776947264Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:22:13.776994 containerd[1602]: time="2024-10-09T07:22:13.776963474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:22:13.777173 containerd[1602]: time="2024-10-09T07:22:13.777140366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:22:13.777173 containerd[1602]: time="2024-10-09T07:22:13.777162918Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:22:13.777225 containerd[1602]: time="2024-10-09T07:22:13.777179719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:22:13.777225 containerd[1602]: time="2024-10-09T07:22:13.777195770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:22:13.777225 containerd[1602]: time="2024-10-09T07:22:13.777211960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777278 containerd[1602]: time="2024-10-09T07:22:13.777229793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777278 containerd[1602]: time="2024-10-09T07:22:13.777243098Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777278 containerd[1602]: time="2024-10-09T07:22:13.777254620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777278 containerd[1602]: time="2024-10-09T07:22:13.777268215Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777351 containerd[1602]: time="2024-10-09T07:22:13.777280789Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777351 containerd[1602]: time="2024-10-09T07:22:13.777294535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777351 containerd[1602]: time="2024-10-09T07:22:13.777306898Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:22:13.777473 containerd[1602]: time="2024-10-09T07:22:13.777438735Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:22:13.777847 containerd[1602]: time="2024-10-09T07:22:13.777826212Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:22:13.777901 containerd[1602]: time="2024-10-09T07:22:13.777856869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.777901 containerd[1602]: time="2024-10-09T07:22:13.777871817Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:22:13.777949 containerd[1602]: time="2024-10-09T07:22:13.777899329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:22:13.777992 containerd[1602]: time="2024-10-09T07:22:13.777972326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.777992 containerd[1602]: time="2024-10-09T07:22:13.777990390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778037 containerd[1602]: time="2024-10-09T07:22:13.778003234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778037 containerd[1602]: time="2024-10-09T07:22:13.778015296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778037 containerd[1602]: time="2024-10-09T07:22:13.778028672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778094 containerd[1602]: time="2024-10-09T07:22:13.778041966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778094 containerd[1602]: time="2024-10-09T07:22:13.778054240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778094 containerd[1602]: time="2024-10-09T07:22:13.778065972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778094 containerd[1602]: time="2024-10-09T07:22:13.778092060Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:22:13.778284 containerd[1602]: time="2024-10-09T07:22:13.778258132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778284 containerd[1602]: time="2024-10-09T07:22:13.778285804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778339 containerd[1602]: time="2024-10-09T07:22:13.778302425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778339 containerd[1602]: time="2024-10-09T07:22:13.778315279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778339 containerd[1602]: time="2024-10-09T07:22:13.778327362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778405 containerd[1602]: time="2024-10-09T07:22:13.778339855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778405 containerd[1602]: time="2024-10-09T07:22:13.778353761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778405 containerd[1602]: time="2024-10-09T07:22:13.778364091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:22:13.778708 containerd[1602]: time="2024-10-09T07:22:13.778651059Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:22:13.778708 containerd[1602]: time="2024-10-09T07:22:13.778713396Z" level=info msg="Connect containerd service" Oct 9 07:22:13.778878 containerd[1602]: time="2024-10-09T07:22:13.778740547Z" level=info msg="using legacy CRI server" Oct 9 07:22:13.778878 containerd[1602]: time="2024-10-09T07:22:13.778748231Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:22:13.778878 containerd[1602]: time="2024-10-09T07:22:13.778830726Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:22:13.779782 containerd[1602]: time="2024-10-09T07:22:13.779741624Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:22:13.780418 containerd[1602]: time="2024-10-09T07:22:13.779806295Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:22:13.780418 containerd[1602]: time="2024-10-09T07:22:13.779848304Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:22:13.780418 containerd[1602]: time="2024-10-09T07:22:13.779868732Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:22:13.780418 containerd[1602]: time="2024-10-09T07:22:13.779895152Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:22:13.780418 containerd[1602]: time="2024-10-09T07:22:13.780273161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:22:13.780546 containerd[1602]: time="2024-10-09T07:22:13.780291465Z" level=info msg="Start subscribing containerd event" Oct 9 07:22:13.780546 containerd[1602]: time="2024-10-09T07:22:13.780512780Z" level=info msg="Start recovering state" Oct 9 07:22:13.780669 containerd[1602]: time="2024-10-09T07:22:13.780624420Z" level=info msg="Start event monitor" Oct 9 07:22:13.780669 containerd[1602]: time="2024-10-09T07:22:13.780637654Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:22:13.780669 containerd[1602]: time="2024-10-09T07:22:13.780646982Z" level=info msg="Start snapshots syncer" Oct 9 07:22:13.780669 containerd[1602]: time="2024-10-09T07:22:13.780679944Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:22:13.780899 containerd[1602]: time="2024-10-09T07:22:13.780691826Z" level=info msg="Start streaming server" Oct 9 07:22:13.780899 containerd[1602]: time="2024-10-09T07:22:13.780796893Z" level=info msg="containerd successfully booted in 0.621830s" Oct 9 07:22:13.781140 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:22:14.369603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:14.371245 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 9 07:22:14.373602 systemd[1]: Startup finished in 7.136s (kernel) + 5.760s (userspace) = 12.897s. Oct 9 07:22:14.375793 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:22:15.185529 kubelet[1687]: E1009 07:22:15.185320 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:22:15.189986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:22:15.190321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:22:16.355076 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:22:16.362690 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:51898.service - OpenSSH per-connection server daemon (10.0.0.1:51898). Oct 9 07:22:16.399652 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 51898 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:16.401536 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:16.410759 systemd-logind[1575]: New session 1 of user core. Oct 9 07:22:16.411986 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:22:16.426700 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:22:16.439889 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:22:16.442705 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 9 07:22:16.462000 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:16.564002 systemd[1707]: Queued start job for default target default.target. Oct 9 07:22:16.564466 systemd[1707]: Created slice app.slice - User Application Slice. Oct 9 07:22:16.564488 systemd[1707]: Reached target paths.target - Paths. Oct 9 07:22:16.564500 systemd[1707]: Reached target timers.target - Timers. Oct 9 07:22:16.578527 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:22:16.585587 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:22:16.585659 systemd[1707]: Reached target sockets.target - Sockets. Oct 9 07:22:16.585673 systemd[1707]: Reached target basic.target - Basic System. Oct 9 07:22:16.585713 systemd[1707]: Reached target default.target - Main User Target. Oct 9 07:22:16.585745 systemd[1707]: Startup finished in 117ms. Oct 9 07:22:16.586466 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:22:16.588159 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:22:16.648802 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:51900.service - OpenSSH per-connection server daemon (10.0.0.1:51900). Oct 9 07:22:16.680682 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 51900 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:16.682331 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:16.686206 systemd-logind[1575]: New session 2 of user core. Oct 9 07:22:16.695708 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:22:16.748852 sshd[1720]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:16.760716 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:51904.service - OpenSSH per-connection server daemon (10.0.0.1:51904). 
Oct 9 07:22:16.761475 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:51900.service: Deactivated successfully. Oct 9 07:22:16.763440 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:22:16.764158 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:22:16.765440 systemd-logind[1575]: Removed session 2. Oct 9 07:22:16.787841 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 51904 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:16.789182 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:16.793148 systemd-logind[1575]: New session 3 of user core. Oct 9 07:22:16.800731 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:22:16.849816 sshd[1725]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:16.858696 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:51916.service - OpenSSH per-connection server daemon (10.0.0.1:51916). Oct 9 07:22:16.859481 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:51904.service: Deactivated successfully. Oct 9 07:22:16.861372 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:22:16.862067 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:22:16.863517 systemd-logind[1575]: Removed session 3. Oct 9 07:22:16.886687 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 51916 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:16.888021 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:16.891938 systemd-logind[1575]: New session 4 of user core. Oct 9 07:22:16.901703 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:22:16.963930 sshd[1733]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:16.977705 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932). 
Oct 9 07:22:16.978333 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:51916.service: Deactivated successfully. Oct 9 07:22:16.980244 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:22:16.981027 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:22:16.982477 systemd-logind[1575]: Removed session 4. Oct 9 07:22:17.006195 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:17.007773 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:17.011814 systemd-logind[1575]: New session 5 of user core. Oct 9 07:22:17.027847 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:22:17.086404 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:22:17.086720 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:22:17.102475 sudo[1748]: pam_unix(sudo:session): session closed for user root Oct 9 07:22:17.104635 sshd[1741]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:17.115772 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:51938.service - OpenSSH per-connection server daemon (10.0.0.1:51938). Oct 9 07:22:17.116327 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:51932.service: Deactivated successfully. Oct 9 07:22:17.118651 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:22:17.119419 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:22:17.120961 systemd-logind[1575]: Removed session 5. Oct 9 07:22:17.144515 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 51938 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:17.146198 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:17.150552 systemd-logind[1575]: New session 6 of user core. 
Oct 9 07:22:17.164965 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:22:17.220134 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:22:17.220499 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:22:17.224196 sudo[1758]: pam_unix(sudo:session): session closed for user root Oct 9 07:22:17.230722 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:22:17.231043 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:22:17.246662 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:22:17.248701 auditctl[1761]: No rules Oct 9 07:22:17.250059 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:22:17.250398 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:22:17.252402 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:22:17.282441 augenrules[1780]: No rules Oct 9 07:22:17.284523 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:22:17.285908 sudo[1757]: pam_unix(sudo:session): session closed for user root Oct 9 07:22:17.287646 sshd[1750]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:17.300693 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:51950.service - OpenSSH per-connection server daemon (10.0.0.1:51950). Oct 9 07:22:17.301188 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:51938.service: Deactivated successfully. Oct 9 07:22:17.303726 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:22:17.304880 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:22:17.306150 systemd-logind[1575]: Removed session 6. 
Oct 9 07:22:17.328633 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 51950 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:22:17.330062 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:17.334285 systemd-logind[1575]: New session 7 of user core. Oct 9 07:22:17.343779 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:22:17.397717 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:22:17.398032 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:22:17.540673 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:22:17.541029 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:22:17.979408 dockerd[1803]: time="2024-10-09T07:22:17.979255972Z" level=info msg="Starting up" Oct 9 07:22:18.686012 dockerd[1803]: time="2024-10-09T07:22:18.685950898Z" level=info msg="Loading containers: start." Oct 9 07:22:18.825484 kernel: Initializing XFRM netlink socket Oct 9 07:22:18.942658 systemd-networkd[1251]: docker0: Link UP Oct 9 07:22:18.965495 dockerd[1803]: time="2024-10-09T07:22:18.965430822Z" level=info msg="Loading containers: done." Oct 9 07:22:19.036573 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2383152117-merged.mount: Deactivated successfully. 
Oct 9 07:22:19.038205 dockerd[1803]: time="2024-10-09T07:22:19.038152622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:22:19.038679 dockerd[1803]: time="2024-10-09T07:22:19.038391319Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:22:19.038679 dockerd[1803]: time="2024-10-09T07:22:19.038550127Z" level=info msg="Daemon has completed initialization" Oct 9 07:22:19.070794 dockerd[1803]: time="2024-10-09T07:22:19.070736548Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:22:19.071010 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:22:20.055911 containerd[1602]: time="2024-10-09T07:22:20.055861639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:22:20.681355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394410635.mount: Deactivated successfully. 
Oct 9 07:22:21.956990 containerd[1602]: time="2024-10-09T07:22:21.956911576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:21.958162 containerd[1602]: time="2024-10-09T07:22:21.958104313Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:22:21.959687 containerd[1602]: time="2024-10-09T07:22:21.959629093Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:21.963379 containerd[1602]: time="2024-10-09T07:22:21.963340032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:21.964511 containerd[1602]: time="2024-10-09T07:22:21.964444584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 1.908529574s" Oct 9 07:22:21.964511 containerd[1602]: time="2024-10-09T07:22:21.964508453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:22:21.990550 containerd[1602]: time="2024-10-09T07:22:21.990483305Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:22:23.895946 containerd[1602]: time="2024-10-09T07:22:23.895870567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:23.896938 containerd[1602]: time="2024-10-09T07:22:23.896872416Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:22:23.898399 containerd[1602]: time="2024-10-09T07:22:23.898348634Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:23.901096 containerd[1602]: time="2024-10-09T07:22:23.901067383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:23.902057 containerd[1602]: time="2024-10-09T07:22:23.902006424Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.911327672s" Oct 9 07:22:23.902057 containerd[1602]: time="2024-10-09T07:22:23.902045547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:22:23.925949 containerd[1602]: time="2024-10-09T07:22:23.925906294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:22:25.192230 containerd[1602]: time="2024-10-09T07:22:25.192148561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:25.193120 containerd[1602]: time="2024-10-09T07:22:25.193044631Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:22:25.194516 containerd[1602]: time="2024-10-09T07:22:25.194477528Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:25.197603 containerd[1602]: time="2024-10-09T07:22:25.197558316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:25.199129 containerd[1602]: time="2024-10-09T07:22:25.199080290Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.27313349s" Oct 9 07:22:25.199176 containerd[1602]: time="2024-10-09T07:22:25.199134422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:22:25.222052 containerd[1602]: time="2024-10-09T07:22:25.222012987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:22:25.296817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:22:25.310598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:25.474286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:22:25.480154 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:22:25.703939 kubelet[2039]: E1009 07:22:25.703815 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:22:25.711352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:22:25.711706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:22:26.646127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836732222.mount: Deactivated successfully. Oct 9 07:22:27.309694 containerd[1602]: time="2024-10-09T07:22:27.309613423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:27.310506 containerd[1602]: time="2024-10-09T07:22:27.310438991Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 07:22:27.311765 containerd[1602]: time="2024-10-09T07:22:27.311719433Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:27.315687 containerd[1602]: time="2024-10-09T07:22:27.315632541Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 2.093579359s" Oct 9 07:22:27.315687 containerd[1602]: 
time="2024-10-09T07:22:27.315675532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:22:27.316238 containerd[1602]: time="2024-10-09T07:22:27.316179126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:27.341907 containerd[1602]: time="2024-10-09T07:22:27.341857282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:22:28.071286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490542198.mount: Deactivated successfully. Oct 9 07:22:29.910614 containerd[1602]: time="2024-10-09T07:22:29.910548625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:29.911394 containerd[1602]: time="2024-10-09T07:22:29.911338216Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:22:29.912781 containerd[1602]: time="2024-10-09T07:22:29.912744523Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:29.918492 containerd[1602]: time="2024-10-09T07:22:29.918462496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:29.919537 containerd[1602]: time="2024-10-09T07:22:29.919491797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.577594761s" Oct 9 07:22:29.919537 containerd[1602]: time="2024-10-09T07:22:29.919530599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:22:29.942075 containerd[1602]: time="2024-10-09T07:22:29.942032008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:22:30.487094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198731466.mount: Deactivated successfully. Oct 9 07:22:30.492378 containerd[1602]: time="2024-10-09T07:22:30.492339390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:30.493090 containerd[1602]: time="2024-10-09T07:22:30.493037630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 07:22:30.494244 containerd[1602]: time="2024-10-09T07:22:30.494222372Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:30.496713 containerd[1602]: time="2024-10-09T07:22:30.496675252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:30.497564 containerd[1602]: time="2024-10-09T07:22:30.497526608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 555.453113ms" Oct 9 07:22:30.497601 containerd[1602]: time="2024-10-09T07:22:30.497565060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:22:30.518935 containerd[1602]: time="2024-10-09T07:22:30.518899119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:22:31.047737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123248190.mount: Deactivated successfully. Oct 9 07:22:33.733904 containerd[1602]: time="2024-10-09T07:22:33.733841895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:33.734591 containerd[1602]: time="2024-10-09T07:22:33.734552006Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 07:22:33.735727 containerd[1602]: time="2024-10-09T07:22:33.735690211Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:33.738529 containerd[1602]: time="2024-10-09T07:22:33.738493608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:22:33.739635 containerd[1602]: time="2024-10-09T07:22:33.739602568Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.22066185s" Oct 9 07:22:33.739689 
containerd[1602]: time="2024-10-09T07:22:33.739635910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:22:35.796860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:22:35.810603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:35.959648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:35.963440 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:22:36.008495 kubelet[2255]: E1009 07:22:36.008392 2255 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:22:36.014496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:22:36.014862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:22:36.248168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:36.261666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:36.282877 systemd[1]: Reloading requested from client PID 2273 ('systemctl') (unit session-7.scope)... Oct 9 07:22:36.282892 systemd[1]: Reloading... Oct 9 07:22:36.373485 zram_generator::config[2313]: No configuration found. Oct 9 07:22:36.881658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:22:36.953082 systemd[1]: Reloading finished in 669 ms. 
Oct 9 07:22:37.000294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:22:37.000408 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:22:37.000939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:37.018662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:37.158985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:37.165130 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:22:37.213366 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:22:37.213366 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:22:37.213366 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:22:37.214288 kubelet[2370]: I1009 07:22:37.214212 2370 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:22:37.433434 kubelet[2370]: I1009 07:22:37.433323 2370 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:22:37.433434 kubelet[2370]: I1009 07:22:37.433357 2370 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:22:37.433645 kubelet[2370]: I1009 07:22:37.433620 2370 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:22:37.450239 kubelet[2370]: E1009 07:22:37.450208 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.451103 kubelet[2370]: I1009 07:22:37.451079 2370 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:22:37.467937 kubelet[2370]: I1009 07:22:37.467900 2370 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:22:37.468368 kubelet[2370]: I1009 07:22:37.468340 2370 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:22:37.468557 kubelet[2370]: I1009 07:22:37.468530 2370 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:22:37.468641 kubelet[2370]: I1009 07:22:37.468564 2370 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:22:37.468641 kubelet[2370]: I1009 07:22:37.468575 2370 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:22:37.468697 kubelet[2370]: I1009 
07:22:37.468690 2370 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:22:37.468837 kubelet[2370]: I1009 07:22:37.468815 2370 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:22:37.468837 kubelet[2370]: I1009 07:22:37.468835 2370 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:22:37.468902 kubelet[2370]: I1009 07:22:37.468883 2370 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:22:37.469477 kubelet[2370]: I1009 07:22:37.468927 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:22:37.469477 kubelet[2370]: W1009 07:22:37.469369 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.469477 kubelet[2370]: E1009 07:22:37.469423 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.469719 kubelet[2370]: W1009 07:22:37.469495 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.469719 kubelet[2370]: E1009 07:22:37.469543 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.469995 kubelet[2370]: I1009 07:22:37.469975 2370 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:22:37.472317 kubelet[2370]: I1009 07:22:37.472287 2370 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:22:37.473152 kubelet[2370]: W1009 07:22:37.473126 2370 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:22:37.473944 kubelet[2370]: I1009 07:22:37.473748 2370 server.go:1256] "Started kubelet" Oct 9 07:22:37.474090 kubelet[2370]: I1009 07:22:37.474047 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:22:37.474883 kubelet[2370]: I1009 07:22:37.474519 2370 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:22:37.474883 kubelet[2370]: I1009 07:22:37.474577 2370 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:22:37.474883 kubelet[2370]: I1009 07:22:37.474883 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:22:37.476161 kubelet[2370]: I1009 07:22:37.475492 2370 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:22:37.477809 kubelet[2370]: E1009 07:22:37.477299 2370 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:22:37.478061 kubelet[2370]: E1009 07:22:37.478030 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:22:37.478099 kubelet[2370]: I1009 07:22:37.478089 2370 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:22:37.478823 kubelet[2370]: I1009 07:22:37.478177 2370 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:22:37.478823 kubelet[2370]: I1009 07:22:37.478230 2370 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:22:37.478823 kubelet[2370]: E1009 07:22:37.478305 2370 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fcb7e677348b91 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:22:37.473721233 +0000 UTC m=+0.303961958,LastTimestamp:2024-10-09 07:22:37.473721233 +0000 UTC m=+0.303961958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:22:37.478823 kubelet[2370]: W1009 07:22:37.478668 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.478823 kubelet[2370]: E1009 07:22:37.478704 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.478823 kubelet[2370]: I1009 07:22:37.478739 2370 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:22:37.478823 kubelet[2370]: I1009 07:22:37.478810 2370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:22:37.479405 kubelet[2370]: E1009 07:22:37.479344 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Oct 9 07:22:37.479964 kubelet[2370]: I1009 07:22:37.479932 2370 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:22:37.493583 kubelet[2370]: I1009 07:22:37.493542 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:22:37.494908 kubelet[2370]: I1009 07:22:37.494878 2370 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:22:37.494908 kubelet[2370]: I1009 07:22:37.494910 2370 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:22:37.494980 kubelet[2370]: I1009 07:22:37.494927 2370 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:22:37.495014 kubelet[2370]: E1009 07:22:37.494993 2370 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:22:37.500105 kubelet[2370]: W1009 07:22:37.500077 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.500244 kubelet[2370]: E1009 07:22:37.500223 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:37.507077 kubelet[2370]: I1009 07:22:37.507052 2370 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:22:37.507077 kubelet[2370]: I1009 07:22:37.507076 2370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:22:37.507159 kubelet[2370]: I1009 07:22:37.507094 2370 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:22:37.579279 kubelet[2370]: I1009 07:22:37.579232 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:37.579671 kubelet[2370]: E1009 07:22:37.579642 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 9 07:22:37.595857 kubelet[2370]: E1009 07:22:37.595813 2370 kubelet.go:2353] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Oct 9 07:22:37.680493 kubelet[2370]: E1009 07:22:37.680447 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Oct 9 07:22:37.780826 kubelet[2370]: I1009 07:22:37.780698 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:37.781137 kubelet[2370]: E1009 07:22:37.781103 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 9 07:22:37.796205 kubelet[2370]: E1009 07:22:37.796164 2370 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:22:38.081992 kubelet[2370]: E1009 07:22:38.081942 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Oct 9 07:22:38.183434 kubelet[2370]: I1009 07:22:38.183405 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:38.183887 kubelet[2370]: E1009 07:22:38.183849 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 9 07:22:38.196960 kubelet[2370]: E1009 07:22:38.196903 2370 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:22:38.296650 kubelet[2370]: I1009 07:22:38.296595 2370 policy_none.go:49] "None policy: Start" Oct 9 07:22:38.297471 kubelet[2370]: 
I1009 07:22:38.297426 2370 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:22:38.297510 kubelet[2370]: I1009 07:22:38.297492 2370 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:22:38.351877 kubelet[2370]: I1009 07:22:38.351730 2370 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:22:38.352109 kubelet[2370]: I1009 07:22:38.352084 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:22:38.354135 kubelet[2370]: E1009 07:22:38.354117 2370 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 07:22:38.480883 kubelet[2370]: W1009 07:22:38.480841 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.480932 kubelet[2370]: E1009 07:22:38.480890 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.624068 kubelet[2370]: W1009 07:22:38.623905 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.624068 kubelet[2370]: E1009 07:22:38.623984 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.883486 
kubelet[2370]: E1009 07:22:38.883313 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Oct 9 07:22:38.883658 kubelet[2370]: W1009 07:22:38.883586 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.883690 kubelet[2370]: E1009 07:22:38.883669 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.977708 kubelet[2370]: W1009 07:22:38.977659 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.977708 kubelet[2370]: E1009 07:22:38.977704 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:38.986031 kubelet[2370]: I1009 07:22:38.985990 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:38.986272 kubelet[2370]: E1009 07:22:38.986248 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" 
node="localhost" Oct 9 07:22:38.997434 kubelet[2370]: I1009 07:22:38.997397 2370 topology_manager.go:215] "Topology Admit Handler" podUID="6efdb70e826cb6f3d45170b6b799b266" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 07:22:38.998216 kubelet[2370]: I1009 07:22:38.998164 2370 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 07:22:38.998966 kubelet[2370]: I1009 07:22:38.998935 2370 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 07:22:39.085715 kubelet[2370]: I1009 07:22:39.085645 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:39.085715 kubelet[2370]: I1009 07:22:39.085705 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:39.085850 kubelet[2370]: I1009 07:22:39.085731 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:39.085850 kubelet[2370]: I1009 07:22:39.085766 2370 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:39.085941 kubelet[2370]: I1009 07:22:39.085865 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:39.085987 kubelet[2370]: I1009 07:22:39.085954 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:39.086027 kubelet[2370]: I1009 07:22:39.085996 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:22:39.086087 kubelet[2370]: I1009 07:22:39.086046 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:39.086126 kubelet[2370]: I1009 07:22:39.086112 2370 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:39.302786 kubelet[2370]: E1009 07:22:39.302744 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:39.303521 containerd[1602]: time="2024-10-09T07:22:39.303472700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6efdb70e826cb6f3d45170b6b799b266,Namespace:kube-system,Attempt:0,}" Oct 9 07:22:39.304776 kubelet[2370]: E1009 07:22:39.304727 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:39.305074 kubelet[2370]: E1009 07:22:39.305048 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:39.305441 containerd[1602]: time="2024-10-09T07:22:39.305366272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 07:22:39.305545 containerd[1602]: time="2024-10-09T07:22:39.305499211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 07:22:39.579638 kubelet[2370]: E1009 07:22:39.579512 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Oct 9 07:22:39.824668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775796591.mount: Deactivated successfully. Oct 9 07:22:39.831648 containerd[1602]: time="2024-10-09T07:22:39.831533483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:22:39.833354 containerd[1602]: time="2024-10-09T07:22:39.833311628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:22:39.834438 containerd[1602]: time="2024-10-09T07:22:39.834381274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:22:39.835910 containerd[1602]: time="2024-10-09T07:22:39.835864164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:22:39.836960 containerd[1602]: time="2024-10-09T07:22:39.836891401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:22:39.836960 containerd[1602]: time="2024-10-09T07:22:39.836899727Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:22:39.837889 containerd[1602]: time="2024-10-09T07:22:39.837844679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:22:39.839611 containerd[1602]: time="2024-10-09T07:22:39.839571998Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:22:39.842599 containerd[1602]: time="2024-10-09T07:22:39.842555653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.036826ms" Oct 9 07:22:39.843834 containerd[1602]: time="2024-10-09T07:22:39.843779769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.16963ms" Oct 9 07:22:39.846651 containerd[1602]: time="2024-10-09T07:22:39.846626057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.033962ms" Oct 9 07:22:40.092510 containerd[1602]: time="2024-10-09T07:22:40.091970656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:22:40.092510 containerd[1602]: time="2024-10-09T07:22:40.092103044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.092510 containerd[1602]: time="2024-10-09T07:22:40.092161684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:22:40.092510 containerd[1602]: time="2024-10-09T07:22:40.092203041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.095676 containerd[1602]: time="2024-10-09T07:22:40.095528057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:22:40.095676 containerd[1602]: time="2024-10-09T07:22:40.095641339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.095867 containerd[1602]: time="2024-10-09T07:22:40.095666236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:22:40.095867 containerd[1602]: time="2024-10-09T07:22:40.095680473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.101954 containerd[1602]: time="2024-10-09T07:22:40.101855714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:22:40.102060 containerd[1602]: time="2024-10-09T07:22:40.101927989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.102060 containerd[1602]: time="2024-10-09T07:22:40.101952445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:22:40.102060 containerd[1602]: time="2024-10-09T07:22:40.101963135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:22:40.180064 containerd[1602]: time="2024-10-09T07:22:40.179684621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1759ededcf75ac8308f542b09ed527fec16c160c07a936483618ce60a108b0a4\"" Oct 9 07:22:40.185694 kubelet[2370]: E1009 07:22:40.185662 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.188603 containerd[1602]: time="2024-10-09T07:22:40.188554334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6f3dabc362bbaafc76b14a55abc67a977c551adcdae57798f239220f937df0d\"" Oct 9 07:22:40.188603 containerd[1602]: time="2024-10-09T07:22:40.188588899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6efdb70e826cb6f3d45170b6b799b266,Namespace:kube-system,Attempt:0,} returns sandbox id \"61940a7a1cefe1930084257d9c64552f1b25246971605178c904f8439674ea61\"" Oct 9 07:22:40.189934 kubelet[2370]: E1009 07:22:40.189919 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.190305 kubelet[2370]: E1009 07:22:40.190088 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.190812 containerd[1602]: 
time="2024-10-09T07:22:40.190781832Z" level=info msg="CreateContainer within sandbox \"1759ededcf75ac8308f542b09ed527fec16c160c07a936483618ce60a108b0a4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:22:40.192355 containerd[1602]: time="2024-10-09T07:22:40.192325306Z" level=info msg="CreateContainer within sandbox \"61940a7a1cefe1930084257d9c64552f1b25246971605178c904f8439674ea61\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:22:40.192732 containerd[1602]: time="2024-10-09T07:22:40.192689890Z" level=info msg="CreateContainer within sandbox \"c6f3dabc362bbaafc76b14a55abc67a977c551adcdae57798f239220f937df0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:22:40.217544 containerd[1602]: time="2024-10-09T07:22:40.217504326Z" level=info msg="CreateContainer within sandbox \"c6f3dabc362bbaafc76b14a55abc67a977c551adcdae57798f239220f937df0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"81f890bc444df7466dbf53261b2f1e001109075baf7294f36e0e05804a9fc74b\"" Oct 9 07:22:40.218045 containerd[1602]: time="2024-10-09T07:22:40.218017108Z" level=info msg="StartContainer for \"81f890bc444df7466dbf53261b2f1e001109075baf7294f36e0e05804a9fc74b\"" Oct 9 07:22:40.220935 containerd[1602]: time="2024-10-09T07:22:40.220901908Z" level=info msg="CreateContainer within sandbox \"1759ededcf75ac8308f542b09ed527fec16c160c07a936483618ce60a108b0a4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f379a28c0106f2dec78374a9253253ecbcda211b371c2cf8a47dbecb3a87a467\"" Oct 9 07:22:40.221507 containerd[1602]: time="2024-10-09T07:22:40.221289545Z" level=info msg="StartContainer for \"f379a28c0106f2dec78374a9253253ecbcda211b371c2cf8a47dbecb3a87a467\"" Oct 9 07:22:40.222402 containerd[1602]: time="2024-10-09T07:22:40.222356305Z" level=info msg="CreateContainer within sandbox \"61940a7a1cefe1930084257d9c64552f1b25246971605178c904f8439674ea61\" 
for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7151f7a7fa570fcf676b69340cede0d5e3e013a19c280935f09560c1da64c721\"" Oct 9 07:22:40.222859 containerd[1602]: time="2024-10-09T07:22:40.222824263Z" level=info msg="StartContainer for \"7151f7a7fa570fcf676b69340cede0d5e3e013a19c280935f09560c1da64c721\"" Oct 9 07:22:40.295484 containerd[1602]: time="2024-10-09T07:22:40.295361706Z" level=info msg="StartContainer for \"81f890bc444df7466dbf53261b2f1e001109075baf7294f36e0e05804a9fc74b\" returns successfully" Oct 9 07:22:40.300015 containerd[1602]: time="2024-10-09T07:22:40.299920426Z" level=info msg="StartContainer for \"f379a28c0106f2dec78374a9253253ecbcda211b371c2cf8a47dbecb3a87a467\" returns successfully" Oct 9 07:22:40.300573 containerd[1602]: time="2024-10-09T07:22:40.300379727Z" level=info msg="StartContainer for \"7151f7a7fa570fcf676b69340cede0d5e3e013a19c280935f09560c1da64c721\" returns successfully" Oct 9 07:22:40.506845 kubelet[2370]: E1009 07:22:40.506690 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.509604 kubelet[2370]: E1009 07:22:40.509111 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.510770 kubelet[2370]: E1009 07:22:40.510722 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:40.587835 kubelet[2370]: I1009 07:22:40.587795 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:41.513678 kubelet[2370]: E1009 07:22:41.513613 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:41.793375 kubelet[2370]: E1009 07:22:41.793206 2370 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 07:22:41.881331 kubelet[2370]: I1009 07:22:41.881274 2370 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 07:22:42.473196 kubelet[2370]: I1009 07:22:42.473146 2370 apiserver.go:52] "Watching apiserver" Oct 9 07:22:42.479225 kubelet[2370]: I1009 07:22:42.479197 2370 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:22:44.268343 systemd[1]: Reloading requested from client PID 2647 ('systemctl') (unit session-7.scope)... Oct 9 07:22:44.268359 systemd[1]: Reloading... Oct 9 07:22:44.346716 zram_generator::config[2684]: No configuration found. Oct 9 07:22:44.462518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:22:44.542189 systemd[1]: Reloading finished in 273 ms. Oct 9 07:22:44.577702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:44.598053 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:22:44.598583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:44.607870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:22:44.756960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:22:44.762638 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:22:44.813222 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:22:44.813222 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:22:44.813222 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:22:44.813786 kubelet[2739]: I1009 07:22:44.813273 2739 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:22:44.819117 kubelet[2739]: I1009 07:22:44.819069 2739 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:22:44.819117 kubelet[2739]: I1009 07:22:44.819109 2739 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:22:44.819402 kubelet[2739]: I1009 07:22:44.819375 2739 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:22:44.820925 kubelet[2739]: I1009 07:22:44.820902 2739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:22:44.823336 kubelet[2739]: I1009 07:22:44.822961 2739 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:22:44.833930 kubelet[2739]: I1009 07:22:44.833903 2739 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:22:44.834561 kubelet[2739]: I1009 07:22:44.834529 2739 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:22:44.834759 kubelet[2739]: I1009 07:22:44.834725 2739 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:22:44.834759 kubelet[2739]: I1009 07:22:44.834761 2739 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:22:44.834906 kubelet[2739]: I1009 07:22:44.834771 2739 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:22:44.834906 kubelet[2739]: I1009 
07:22:44.834816 2739 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:22:44.834967 kubelet[2739]: I1009 07:22:44.834923 2739 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:22:44.834967 kubelet[2739]: I1009 07:22:44.834939 2739 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:22:44.834967 kubelet[2739]: I1009 07:22:44.834965 2739 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:22:44.835044 kubelet[2739]: I1009 07:22:44.834981 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:22:44.835575 kubelet[2739]: I1009 07:22:44.835544 2739 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:22:44.836304 kubelet[2739]: I1009 07:22:44.835776 2739 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:22:44.836304 kubelet[2739]: I1009 07:22:44.836269 2739 server.go:1256] "Started kubelet" Oct 9 07:22:44.837974 kubelet[2739]: I1009 07:22:44.837257 2739 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:22:44.838282 kubelet[2739]: I1009 07:22:44.838242 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:22:44.838594 kubelet[2739]: I1009 07:22:44.838562 2739 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:22:44.839887 kubelet[2739]: I1009 07:22:44.839864 2739 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:22:44.844371 kubelet[2739]: I1009 07:22:44.844345 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:22:44.846863 kubelet[2739]: I1009 07:22:44.846844 2739 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:22:44.847075 kubelet[2739]: I1009 07:22:44.847057 2739 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Oct 9 07:22:44.847424 kubelet[2739]: I1009 07:22:44.847406 2739 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:22:44.852948 kubelet[2739]: I1009 07:22:44.852025 2739 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:22:44.852948 kubelet[2739]: I1009 07:22:44.852129 2739 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:22:44.855504 kubelet[2739]: E1009 07:22:44.854812 2739 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:22:44.855504 kubelet[2739]: I1009 07:22:44.855346 2739 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:22:44.861646 kubelet[2739]: I1009 07:22:44.861608 2739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:22:44.863066 kubelet[2739]: I1009 07:22:44.863041 2739 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:22:44.863117 kubelet[2739]: I1009 07:22:44.863073 2739 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:22:44.863117 kubelet[2739]: I1009 07:22:44.863091 2739 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:22:44.863172 kubelet[2739]: E1009 07:22:44.863143 2739 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:22:44.909410 kubelet[2739]: I1009 07:22:44.909378 2739 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:22:44.909410 kubelet[2739]: I1009 07:22:44.909400 2739 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:22:44.909410 kubelet[2739]: I1009 07:22:44.909419 2739 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:22:44.909630 kubelet[2739]: I1009 07:22:44.909614 2739 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:22:44.909700 kubelet[2739]: I1009 07:22:44.909639 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:22:44.909700 kubelet[2739]: I1009 07:22:44.909646 2739 policy_none.go:49] "None policy: Start" Oct 9 07:22:44.910346 kubelet[2739]: I1009 07:22:44.910324 2739 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:22:44.910346 kubelet[2739]: I1009 07:22:44.910356 2739 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:22:44.910577 kubelet[2739]: I1009 07:22:44.910555 2739 state_mem.go:75] "Updated machine memory state" Oct 9 07:22:44.912537 kubelet[2739]: I1009 07:22:44.912464 2739 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:22:44.915156 kubelet[2739]: I1009 07:22:44.914985 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:22:44.963612 kubelet[2739]: I1009 07:22:44.963561 2739 topology_manager.go:215] "Topology Admit Handler" 
podUID="6efdb70e826cb6f3d45170b6b799b266" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 07:22:44.963730 kubelet[2739]: I1009 07:22:44.963668 2739 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 07:22:44.963730 kubelet[2739]: I1009 07:22:44.963705 2739 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 07:22:45.020383 kubelet[2739]: I1009 07:22:45.020345 2739 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:22:45.026108 kubelet[2739]: I1009 07:22:45.026086 2739 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 07:22:45.026916 kubelet[2739]: I1009 07:22:45.026154 2739 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 07:22:45.048356 kubelet[2739]: I1009 07:22:45.048320 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:45.048765 kubelet[2739]: I1009 07:22:45.048523 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:45.048765 kubelet[2739]: I1009 07:22:45.048563 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:45.048765 kubelet[2739]: I1009 07:22:45.048604 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:22:45.048765 kubelet[2739]: I1009 07:22:45.048625 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:45.048765 kubelet[2739]: I1009 07:22:45.048642 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:45.048900 kubelet[2739]: I1009 07:22:45.048663 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6efdb70e826cb6f3d45170b6b799b266-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6efdb70e826cb6f3d45170b6b799b266\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:45.048900 kubelet[2739]: I1009 07:22:45.048682 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:45.048900 kubelet[2739]: I1009 07:22:45.048701 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:22:45.272733 kubelet[2739]: E1009 07:22:45.272512 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.272733 kubelet[2739]: E1009 07:22:45.272536 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.272874 kubelet[2739]: E1009 07:22:45.272837 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.837438 kubelet[2739]: I1009 07:22:45.837377 2739 apiserver.go:52] "Watching apiserver" Oct 9 07:22:45.849661 kubelet[2739]: I1009 07:22:45.849579 2739 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:22:45.876765 kubelet[2739]: E1009 07:22:45.876725 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.877819 kubelet[2739]: E1009 07:22:45.877796 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.884741 kubelet[2739]: E1009 07:22:45.884708 2739 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 07:22:45.885551 kubelet[2739]: E1009 07:22:45.885313 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:45.914891 kubelet[2739]: I1009 07:22:45.914848 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.914780337 podStartE2EDuration="1.914780337s" podCreationTimestamp="2024-10-09 07:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:22:45.914611773 +0000 UTC m=+1.147352814" watchObservedRunningTime="2024-10-09 07:22:45.914780337 +0000 UTC m=+1.147521378" Oct 9 07:22:45.915065 kubelet[2739]: I1009 07:22:45.914968 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.914950434 podStartE2EDuration="1.914950434s" podCreationTimestamp="2024-10-09 07:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:22:45.90756473 +0000 UTC m=+1.140305771" watchObservedRunningTime="2024-10-09 07:22:45.914950434 +0000 UTC m=+1.147691475" Oct 9 07:22:45.929294 kubelet[2739]: I1009 07:22:45.929247 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.929203805 podStartE2EDuration="1.929203805s" podCreationTimestamp="2024-10-09 07:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:22:45.921816188 +0000 UTC m=+1.154557229" watchObservedRunningTime="2024-10-09 07:22:45.929203805 +0000 UTC m=+1.161944836" Oct 9 07:22:46.877165 kubelet[2739]: E1009 07:22:46.877125 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:47.069325 kubelet[2739]: E1009 07:22:47.069282 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:47.878335 kubelet[2739]: E1009 07:22:47.878293 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:48.879685 kubelet[2739]: E1009 07:22:48.879639 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:49.055233 sudo[1793]: pam_unix(sudo:session): session closed for user root Oct 9 07:22:49.057003 sshd[1786]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:49.061213 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:51950.service: Deactivated successfully. Oct 9 07:22:49.063862 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:22:49.063936 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:22:49.064928 systemd-logind[1575]: Removed session 7. 
Oct 9 07:22:54.781189 kubelet[2739]: E1009 07:22:54.781146 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:54.887689 kubelet[2739]: E1009 07:22:54.887635 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:57.073004 kubelet[2739]: E1009 07:22:57.072964 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:57.449622 kubelet[2739]: E1009 07:22:57.449405 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:22:57.478807 update_engine[1577]: I1009 07:22:57.478735 1577 update_attempter.cc:509] Updating boot flags... 
Oct 9 07:22:57.510495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2833) Oct 9 07:22:57.553543 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2834) Oct 9 07:22:57.596496 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2834) Oct 9 07:22:57.892032 kubelet[2739]: E1009 07:22:57.891986 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:00.026084 kubelet[2739]: I1009 07:23:00.026042 2739 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:23:00.026603 kubelet[2739]: I1009 07:23:00.026570 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:23:00.026630 containerd[1602]: time="2024-10-09T07:23:00.026380608Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 07:23:00.947838 kubelet[2739]: I1009 07:23:00.947478 2739 topology_manager.go:215] "Topology Admit Handler" podUID="2db052a2-9cae-4e78-a512-ae7618d658cc" podNamespace="kube-system" podName="kube-proxy-kkf4v" Oct 9 07:23:01.047483 kubelet[2739]: I1009 07:23:01.047405 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2db052a2-9cae-4e78-a512-ae7618d658cc-lib-modules\") pod \"kube-proxy-kkf4v\" (UID: \"2db052a2-9cae-4e78-a512-ae7618d658cc\") " pod="kube-system/kube-proxy-kkf4v" Oct 9 07:23:01.047483 kubelet[2739]: I1009 07:23:01.047472 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2db052a2-9cae-4e78-a512-ae7618d658cc-kube-proxy\") pod \"kube-proxy-kkf4v\" (UID: \"2db052a2-9cae-4e78-a512-ae7618d658cc\") " pod="kube-system/kube-proxy-kkf4v" Oct 9 07:23:01.047483 kubelet[2739]: I1009 07:23:01.047497 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2db052a2-9cae-4e78-a512-ae7618d658cc-xtables-lock\") pod \"kube-proxy-kkf4v\" (UID: \"2db052a2-9cae-4e78-a512-ae7618d658cc\") " pod="kube-system/kube-proxy-kkf4v" Oct 9 07:23:01.048118 kubelet[2739]: I1009 07:23:01.047519 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngkzf\" (UniqueName: \"kubernetes.io/projected/2db052a2-9cae-4e78-a512-ae7618d658cc-kube-api-access-ngkzf\") pod \"kube-proxy-kkf4v\" (UID: \"2db052a2-9cae-4e78-a512-ae7618d658cc\") " pod="kube-system/kube-proxy-kkf4v" Oct 9 07:23:01.263353 kubelet[2739]: I1009 07:23:01.261524 2739 topology_manager.go:215] "Topology Admit Handler" podUID="a3ae3368-8f98-4385-b347-2695fe76ecef" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-42nlx" Oct 9 07:23:01.350577 
kubelet[2739]: I1009 07:23:01.350527 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2vz\" (UniqueName: \"kubernetes.io/projected/a3ae3368-8f98-4385-b347-2695fe76ecef-kube-api-access-mq2vz\") pod \"tigera-operator-5d56685c77-42nlx\" (UID: \"a3ae3368-8f98-4385-b347-2695fe76ecef\") " pod="tigera-operator/tigera-operator-5d56685c77-42nlx" Oct 9 07:23:01.350577 kubelet[2739]: I1009 07:23:01.350577 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3ae3368-8f98-4385-b347-2695fe76ecef-var-lib-calico\") pod \"tigera-operator-5d56685c77-42nlx\" (UID: \"a3ae3368-8f98-4385-b347-2695fe76ecef\") " pod="tigera-operator/tigera-operator-5d56685c77-42nlx" Oct 9 07:23:01.552867 kubelet[2739]: E1009 07:23:01.552833 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:01.553571 containerd[1602]: time="2024-10-09T07:23:01.553517064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkf4v,Uid:2db052a2-9cae-4e78-a512-ae7618d658cc,Namespace:kube-system,Attempt:0,}" Oct 9 07:23:01.570571 containerd[1602]: time="2024-10-09T07:23:01.570520731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-42nlx,Uid:a3ae3368-8f98-4385-b347-2695fe76ecef,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:23:01.589152 containerd[1602]: time="2024-10-09T07:23:01.589039095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:01.589152 containerd[1602]: time="2024-10-09T07:23:01.589095953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:01.589152 containerd[1602]: time="2024-10-09T07:23:01.589114407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:01.589152 containerd[1602]: time="2024-10-09T07:23:01.589125720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:01.602432 containerd[1602]: time="2024-10-09T07:23:01.602238684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:01.602432 containerd[1602]: time="2024-10-09T07:23:01.602308275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:01.602432 containerd[1602]: time="2024-10-09T07:23:01.602322292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:01.602432 containerd[1602]: time="2024-10-09T07:23:01.602331850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:01.632771 containerd[1602]: time="2024-10-09T07:23:01.632717622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkf4v,Uid:2db052a2-9cae-4e78-a512-ae7618d658cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"95cc6ab651d36a15280856d71fc00b06a4564674eddf49712d2ce40e3213dc27\"" Oct 9 07:23:01.633735 kubelet[2739]: E1009 07:23:01.633714 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:01.636224 containerd[1602]: time="2024-10-09T07:23:01.636192878Z" level=info msg="CreateContainer within sandbox \"95cc6ab651d36a15280856d71fc00b06a4564674eddf49712d2ce40e3213dc27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:23:01.653195 containerd[1602]: time="2024-10-09T07:23:01.653141070Z" level=info msg="CreateContainer within sandbox \"95cc6ab651d36a15280856d71fc00b06a4564674eddf49712d2ce40e3213dc27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d1a45acd56697272f7fd52ee507af1f3f72a289d5d37b2c242b1df9e4228861\"" Oct 9 07:23:01.654007 containerd[1602]: time="2024-10-09T07:23:01.653959239Z" level=info msg="StartContainer for \"1d1a45acd56697272f7fd52ee507af1f3f72a289d5d37b2c242b1df9e4228861\"" Oct 9 07:23:01.656519 containerd[1602]: time="2024-10-09T07:23:01.656400609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-42nlx,Uid:a3ae3368-8f98-4385-b347-2695fe76ecef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3bf786747afd7557e992676c852c4b8093f54ac95547356254e95bfca6630a5\"" Oct 9 07:23:01.658430 containerd[1602]: time="2024-10-09T07:23:01.658382790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:23:01.722185 containerd[1602]: time="2024-10-09T07:23:01.722130074Z" level=info msg="StartContainer for 
\"1d1a45acd56697272f7fd52ee507af1f3f72a289d5d37b2c242b1df9e4228861\" returns successfully" Oct 9 07:23:01.906640 kubelet[2739]: E1009 07:23:01.906520 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:03.092191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917498600.mount: Deactivated successfully. Oct 9 07:23:03.623375 containerd[1602]: time="2024-10-09T07:23:03.623311082Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:03.624031 containerd[1602]: time="2024-10-09T07:23:03.623935181Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Oct 9 07:23:03.625097 containerd[1602]: time="2024-10-09T07:23:03.625054758Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:03.627495 containerd[1602]: time="2024-10-09T07:23:03.627469793Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:03.628068 containerd[1602]: time="2024-10-09T07:23:03.628031826Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.96960399s" Oct 9 07:23:03.628115 containerd[1602]: time="2024-10-09T07:23:03.628069085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference 
\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:23:03.629537 containerd[1602]: time="2024-10-09T07:23:03.629509699Z" level=info msg="CreateContainer within sandbox \"b3bf786747afd7557e992676c852c4b8093f54ac95547356254e95bfca6630a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:23:03.641111 containerd[1602]: time="2024-10-09T07:23:03.641038282Z" level=info msg="CreateContainer within sandbox \"b3bf786747afd7557e992676c852c4b8093f54ac95547356254e95bfca6630a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d641f3712fd4d3822bde842e3719c0b4826305ef83a09c9dfc361d006c006e9\"" Oct 9 07:23:03.641605 containerd[1602]: time="2024-10-09T07:23:03.641563233Z" level=info msg="StartContainer for \"8d641f3712fd4d3822bde842e3719c0b4826305ef83a09c9dfc361d006c006e9\"" Oct 9 07:23:03.706932 containerd[1602]: time="2024-10-09T07:23:03.706769615Z" level=info msg="StartContainer for \"8d641f3712fd4d3822bde842e3719c0b4826305ef83a09c9dfc361d006c006e9\" returns successfully" Oct 9 07:23:03.927566 kubelet[2739]: I1009 07:23:03.927392 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kkf4v" podStartSLOduration=3.9273182110000002 podStartE2EDuration="3.927318211s" podCreationTimestamp="2024-10-09 07:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:23:01.914657549 +0000 UTC m=+17.147398590" watchObservedRunningTime="2024-10-09 07:23:03.927318211 +0000 UTC m=+19.160059252" Oct 9 07:23:03.928078 kubelet[2739]: I1009 07:23:03.927578 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-42nlx" podStartSLOduration=0.956849093 podStartE2EDuration="2.927547193s" podCreationTimestamp="2024-10-09 07:23:01 +0000 UTC" firstStartedPulling="2024-10-09 07:23:01.657612402 +0000 UTC 
m=+16.890353443" lastFinishedPulling="2024-10-09 07:23:03.628310512 +0000 UTC m=+18.861051543" observedRunningTime="2024-10-09 07:23:03.926952038 +0000 UTC m=+19.159693079" watchObservedRunningTime="2024-10-09 07:23:03.927547193 +0000 UTC m=+19.160288244" Oct 9 07:23:06.492513 kubelet[2739]: I1009 07:23:06.492402 2739 topology_manager.go:215] "Topology Admit Handler" podUID="c761c5de-6ab7-4168-887f-c5c455c2c933" podNamespace="calico-system" podName="calico-typha-764b9dcb45-wctk7" Oct 9 07:23:06.555625 kubelet[2739]: I1009 07:23:06.555561 2739 topology_manager.go:215] "Topology Admit Handler" podUID="910759d5-109b-4e87-a8c5-3796d180def4" podNamespace="calico-system" podName="calico-node-mc5wc" Oct 9 07:23:06.584501 kubelet[2739]: I1009 07:23:06.584436 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c761c5de-6ab7-4168-887f-c5c455c2c933-tigera-ca-bundle\") pod \"calico-typha-764b9dcb45-wctk7\" (UID: \"c761c5de-6ab7-4168-887f-c5c455c2c933\") " pod="calico-system/calico-typha-764b9dcb45-wctk7" Oct 9 07:23:06.584501 kubelet[2739]: I1009 07:23:06.584501 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c761c5de-6ab7-4168-887f-c5c455c2c933-typha-certs\") pod \"calico-typha-764b9dcb45-wctk7\" (UID: \"c761c5de-6ab7-4168-887f-c5c455c2c933\") " pod="calico-system/calico-typha-764b9dcb45-wctk7" Oct 9 07:23:06.584680 kubelet[2739]: I1009 07:23:06.584525 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96sx\" (UniqueName: \"kubernetes.io/projected/c761c5de-6ab7-4168-887f-c5c455c2c933-kube-api-access-g96sx\") pod \"calico-typha-764b9dcb45-wctk7\" (UID: \"c761c5de-6ab7-4168-887f-c5c455c2c933\") " pod="calico-system/calico-typha-764b9dcb45-wctk7" Oct 9 07:23:06.585517 kubelet[2739]: I1009 07:23:06.585489 
2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-var-lib-calico\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585597 kubelet[2739]: I1009 07:23:06.585565 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-cni-log-dir\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585818 kubelet[2739]: I1009 07:23:06.585627 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-lib-modules\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585818 kubelet[2739]: I1009 07:23:06.585722 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-var-run-calico\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585818 kubelet[2739]: I1009 07:23:06.585767 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-xtables-lock\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585818 kubelet[2739]: I1009 07:23:06.585788 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-policysync\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585818 kubelet[2739]: I1009 07:23:06.585808 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/910759d5-109b-4e87-a8c5-3796d180def4-tigera-ca-bundle\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585951 kubelet[2739]: I1009 07:23:06.585837 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-flexvol-driver-host\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585951 kubelet[2739]: I1009 07:23:06.585857 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/910759d5-109b-4e87-a8c5-3796d180def4-node-certs\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585951 kubelet[2739]: I1009 07:23:06.585876 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-cni-bin-dir\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585951 kubelet[2739]: I1009 07:23:06.585894 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/910759d5-109b-4e87-a8c5-3796d180def4-cni-net-dir\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.585951 kubelet[2739]: I1009 07:23:06.585912 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjlv8\" (UniqueName: \"kubernetes.io/projected/910759d5-109b-4e87-a8c5-3796d180def4-kube-api-access-xjlv8\") pod \"calico-node-mc5wc\" (UID: \"910759d5-109b-4e87-a8c5-3796d180def4\") " pod="calico-system/calico-node-mc5wc" Oct 9 07:23:06.691885 kubelet[2739]: I1009 07:23:06.691730 2739 topology_manager.go:215] "Topology Admit Handler" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a" podNamespace="calico-system" podName="csi-node-driver-vpwqt" Oct 9 07:23:06.692922 kubelet[2739]: E1009 07:23:06.692684 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a" Oct 9 07:23:06.699111 kubelet[2739]: E1009 07:23:06.694363 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.699111 kubelet[2739]: W1009 07:23:06.694392 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.699111 kubelet[2739]: E1009 07:23:06.694428 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.701368 kubelet[2739]: E1009 07:23:06.701322 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.701368 kubelet[2739]: W1009 07:23:06.701354 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.702964 kubelet[2739]: E1009 07:23:06.702926 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.706804 kubelet[2739]: E1009 07:23:06.706679 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.706804 kubelet[2739]: W1009 07:23:06.706699 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.706977 kubelet[2739]: E1009 07:23:06.706937 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.707404 kubelet[2739]: E1009 07:23:06.707392 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.707521 kubelet[2739]: W1009 07:23:06.707497 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.709463 kubelet[2739]: E1009 07:23:06.707781 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.709908 kubelet[2739]: E1009 07:23:06.709881 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.709908 kubelet[2739]: W1009 07:23:06.709904 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.710063 kubelet[2739]: E1009 07:23:06.710037 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.710892 kubelet[2739]: E1009 07:23:06.710864 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.712018 kubelet[2739]: W1009 07:23:06.711984 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.712018 kubelet[2739]: E1009 07:23:06.712023 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.714035 kubelet[2739]: E1009 07:23:06.713834 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.714035 kubelet[2739]: W1009 07:23:06.713850 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.714035 kubelet[2739]: E1009 07:23:06.713942 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.715103 kubelet[2739]: E1009 07:23:06.714694 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.715103 kubelet[2739]: W1009 07:23:06.714708 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.715103 kubelet[2739]: E1009 07:23:06.714722 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.715103 kubelet[2739]: E1009 07:23:06.714931 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.715103 kubelet[2739]: W1009 07:23:06.714940 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.715103 kubelet[2739]: E1009 07:23:06.714952 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.716033 kubelet[2739]: E1009 07:23:06.715824 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.716033 kubelet[2739]: W1009 07:23:06.715838 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.716033 kubelet[2739]: E1009 07:23:06.715850 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.771644 kubelet[2739]: E1009 07:23:06.771508 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.771644 kubelet[2739]: W1009 07:23:06.771537 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.771644 kubelet[2739]: E1009 07:23:06.771560 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.772015 kubelet[2739]: E1009 07:23:06.771958 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.772015 kubelet[2739]: W1009 07:23:06.771998 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.772208 kubelet[2739]: E1009 07:23:06.772046 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.772512 kubelet[2739]: E1009 07:23:06.772438 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.772512 kubelet[2739]: W1009 07:23:06.772475 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.772512 kubelet[2739]: E1009 07:23:06.772499 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.772719 kubelet[2739]: E1009 07:23:06.772698 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.772719 kubelet[2739]: W1009 07:23:06.772711 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.772719 kubelet[2739]: E1009 07:23:06.772722 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.772992 kubelet[2739]: E1009 07:23:06.772931 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.772992 kubelet[2739]: W1009 07:23:06.772948 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.772992 kubelet[2739]: E1009 07:23:06.772959 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.773142 kubelet[2739]: E1009 07:23:06.773136 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.773176 kubelet[2739]: W1009 07:23:06.773144 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.773176 kubelet[2739]: E1009 07:23:06.773155 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.773337 kubelet[2739]: E1009 07:23:06.773319 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.773337 kubelet[2739]: W1009 07:23:06.773332 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.773419 kubelet[2739]: E1009 07:23:06.773344 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.773624 kubelet[2739]: E1009 07:23:06.773535 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.773624 kubelet[2739]: W1009 07:23:06.773555 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.773624 kubelet[2739]: E1009 07:23:06.773566 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.774111 kubelet[2739]: E1009 07:23:06.774093 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.774111 kubelet[2739]: W1009 07:23:06.774107 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.774111 kubelet[2739]: E1009 07:23:06.774120 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.774321 kubelet[2739]: E1009 07:23:06.774306 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.774321 kubelet[2739]: W1009 07:23:06.774317 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.774389 kubelet[2739]: E1009 07:23:06.774327 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.774554 kubelet[2739]: E1009 07:23:06.774528 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.774554 kubelet[2739]: W1009 07:23:06.774550 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.774629 kubelet[2739]: E1009 07:23:06.774562 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.774782 kubelet[2739]: E1009 07:23:06.774765 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.774782 kubelet[2739]: W1009 07:23:06.774777 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.774845 kubelet[2739]: E1009 07:23:06.774789 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.775018 kubelet[2739]: E1009 07:23:06.775003 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.775018 kubelet[2739]: W1009 07:23:06.775014 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.775018 kubelet[2739]: E1009 07:23:06.775025 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.775222 kubelet[2739]: E1009 07:23:06.775207 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.775222 kubelet[2739]: W1009 07:23:06.775218 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.775290 kubelet[2739]: E1009 07:23:06.775229 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.775415 kubelet[2739]: E1009 07:23:06.775400 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.775415 kubelet[2739]: W1009 07:23:06.775411 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.775486 kubelet[2739]: E1009 07:23:06.775422 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.775656 kubelet[2739]: E1009 07:23:06.775640 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.775656 kubelet[2739]: W1009 07:23:06.775652 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.775730 kubelet[2739]: E1009 07:23:06.775662 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.775948 kubelet[2739]: E1009 07:23:06.775929 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.775948 kubelet[2739]: W1009 07:23:06.775944 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.776002 kubelet[2739]: E1009 07:23:06.775959 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.776220 kubelet[2739]: E1009 07:23:06.776193 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.776220 kubelet[2739]: W1009 07:23:06.776208 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.776220 kubelet[2739]: E1009 07:23:06.776220 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.776442 kubelet[2739]: E1009 07:23:06.776427 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.776442 kubelet[2739]: W1009 07:23:06.776439 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.776530 kubelet[2739]: E1009 07:23:06.776468 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.776722 kubelet[2739]: E1009 07:23:06.776704 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.776722 kubelet[2739]: W1009 07:23:06.776720 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.776911 kubelet[2739]: E1009 07:23:06.776735 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.788171 kubelet[2739]: E1009 07:23:06.788140 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.788171 kubelet[2739]: W1009 07:23:06.788165 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.788249 kubelet[2739]: E1009 07:23:06.788191 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.788249 kubelet[2739]: I1009 07:23:06.788227 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a-registration-dir\") pod \"csi-node-driver-vpwqt\" (UID: \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\") " pod="calico-system/csi-node-driver-vpwqt" Oct 9 07:23:06.788498 kubelet[2739]: E1009 07:23:06.788467 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.788498 kubelet[2739]: W1009 07:23:06.788484 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.788498 kubelet[2739]: E1009 07:23:06.788502 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.788737 kubelet[2739]: I1009 07:23:06.788655 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a-varrun\") pod \"csi-node-driver-vpwqt\" (UID: \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\") " pod="calico-system/csi-node-driver-vpwqt" Oct 9 07:23:06.788903 kubelet[2739]: E1009 07:23:06.788873 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.788903 kubelet[2739]: W1009 07:23:06.788892 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.789035 kubelet[2739]: E1009 07:23:06.788922 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.789211 kubelet[2739]: E1009 07:23:06.789195 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.789211 kubelet[2739]: W1009 07:23:06.789210 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.789279 kubelet[2739]: E1009 07:23:06.789231 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.789595 kubelet[2739]: E1009 07:23:06.789515 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.789595 kubelet[2739]: W1009 07:23:06.789532 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.789595 kubelet[2739]: E1009 07:23:06.789574 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.789721 kubelet[2739]: I1009 07:23:06.789603 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a-socket-dir\") pod \"csi-node-driver-vpwqt\" (UID: \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\") " pod="calico-system/csi-node-driver-vpwqt" Oct 9 07:23:06.789933 kubelet[2739]: E1009 07:23:06.789897 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.789933 kubelet[2739]: W1009 07:23:06.789910 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.789933 kubelet[2739]: E1009 07:23:06.789929 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.790015 kubelet[2739]: I1009 07:23:06.789948 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpf8\" (UniqueName: \"kubernetes.io/projected/cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a-kube-api-access-nkpf8\") pod \"csi-node-driver-vpwqt\" (UID: \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\") " pod="calico-system/csi-node-driver-vpwqt" Oct 9 07:23:06.790199 kubelet[2739]: E1009 07:23:06.790173 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.790199 kubelet[2739]: W1009 07:23:06.790188 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.790288 kubelet[2739]: E1009 07:23:06.790243 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.790288 kubelet[2739]: I1009 07:23:06.790282 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a-kubelet-dir\") pod \"csi-node-driver-vpwqt\" (UID: \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\") " pod="calico-system/csi-node-driver-vpwqt" Oct 9 07:23:06.791094 kubelet[2739]: E1009 07:23:06.791043 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.791094 kubelet[2739]: W1009 07:23:06.791077 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.791188 kubelet[2739]: E1009 07:23:06.791130 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.791388 kubelet[2739]: E1009 07:23:06.791366 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.791388 kubelet[2739]: W1009 07:23:06.791385 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.791461 kubelet[2739]: E1009 07:23:06.791431 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.791703 kubelet[2739]: E1009 07:23:06.791686 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.791703 kubelet[2739]: W1009 07:23:06.791700 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.791837 kubelet[2739]: E1009 07:23:06.791774 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.791969 kubelet[2739]: E1009 07:23:06.791954 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.791969 kubelet[2739]: W1009 07:23:06.791967 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.792530 kubelet[2739]: E1009 07:23:06.792048 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.792530 kubelet[2739]: E1009 07:23:06.792217 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.792530 kubelet[2739]: W1009 07:23:06.792228 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.792530 kubelet[2739]: E1009 07:23:06.792247 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.792950 kubelet[2739]: E1009 07:23:06.792919 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.792950 kubelet[2739]: W1009 07:23:06.792933 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.792950 kubelet[2739]: E1009 07:23:06.792948 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.793200 kubelet[2739]: E1009 07:23:06.793154 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.793200 kubelet[2739]: W1009 07:23:06.793164 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.793200 kubelet[2739]: E1009 07:23:06.793177 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.793429 kubelet[2739]: E1009 07:23:06.793406 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.793429 kubelet[2739]: W1009 07:23:06.793422 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.793527 kubelet[2739]: E1009 07:23:06.793439 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.803618 kubelet[2739]: E1009 07:23:06.803578 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:06.804533 containerd[1602]: time="2024-10-09T07:23:06.804495245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764b9dcb45-wctk7,Uid:c761c5de-6ab7-4168-887f-c5c455c2c933,Namespace:calico-system,Attempt:0,}" Oct 9 07:23:06.833104 containerd[1602]: time="2024-10-09T07:23:06.832470524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:06.833104 containerd[1602]: time="2024-10-09T07:23:06.832536818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:06.833104 containerd[1602]: time="2024-10-09T07:23:06.832568679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:06.833104 containerd[1602]: time="2024-10-09T07:23:06.832582174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:06.867240 kubelet[2739]: E1009 07:23:06.867199 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:06.869339 containerd[1602]: time="2024-10-09T07:23:06.869168047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mc5wc,Uid:910759d5-109b-4e87-a8c5-3796d180def4,Namespace:calico-system,Attempt:0,}" Oct 9 07:23:06.891829 kubelet[2739]: E1009 07:23:06.891804 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.892001 kubelet[2739]: W1009 07:23:06.891951 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.892001 kubelet[2739]: E1009 07:23:06.891974 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.892430 kubelet[2739]: E1009 07:23:06.892418 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.892526 kubelet[2739]: W1009 07:23:06.892515 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.892605 kubelet[2739]: E1009 07:23:06.892594 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.893013 kubelet[2739]: E1009 07:23:06.892936 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.893013 kubelet[2739]: W1009 07:23:06.892946 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.893013 kubelet[2739]: E1009 07:23:06.892962 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.893433 kubelet[2739]: E1009 07:23:06.893399 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.893496 kubelet[2739]: W1009 07:23:06.893431 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.893534 kubelet[2739]: E1009 07:23:06.893497 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.895086 kubelet[2739]: E1009 07:23:06.895048 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.895086 kubelet[2739]: W1009 07:23:06.895065 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.895086 kubelet[2739]: E1009 07:23:06.895085 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.896704 kubelet[2739]: E1009 07:23:06.896569 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.896704 kubelet[2739]: W1009 07:23:06.896583 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.897182 kubelet[2739]: E1009 07:23:06.897147 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.897267 kubelet[2739]: W1009 07:23:06.897257 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.897499 kubelet[2739]: E1009 07:23:06.897263 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.897499 kubelet[2739]: E1009 07:23:06.897426 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.897755 kubelet[2739]: E1009 07:23:06.897709 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.897806 kubelet[2739]: W1009 07:23:06.897754 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.897856 kubelet[2739]: E1009 07:23:06.897837 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.898568 kubelet[2739]: E1009 07:23:06.898371 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.898568 kubelet[2739]: W1009 07:23:06.898390 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.898568 kubelet[2739]: E1009 07:23:06.898509 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.898830 kubelet[2739]: E1009 07:23:06.898816 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.898919 kubelet[2739]: W1009 07:23:06.898906 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.899019 kubelet[2739]: E1009 07:23:06.899006 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.899515 kubelet[2739]: E1009 07:23:06.899503 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.899577 kubelet[2739]: W1009 07:23:06.899561 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.899831 kubelet[2739]: E1009 07:23:06.899737 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.900745 kubelet[2739]: E1009 07:23:06.900732 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.900990 kubelet[2739]: W1009 07:23:06.900810 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.901189 kubelet[2739]: E1009 07:23:06.901056 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.901189 kubelet[2739]: E1009 07:23:06.901110 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.901189 kubelet[2739]: W1009 07:23:06.901134 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.901286 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.901438 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.902265 kubelet[2739]: W1009 07:23:06.901446 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.901548 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.901688 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.902265 kubelet[2739]: W1009 07:23:06.901695 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.901937 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.902265 kubelet[2739]: E1009 07:23:06.902207 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.902265 kubelet[2739]: W1009 07:23:06.902219 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.902672 kubelet[2739]: E1009 07:23:06.902380 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.902742 kubelet[2739]: E1009 07:23:06.902690 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.902742 kubelet[2739]: W1009 07:23:06.902701 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.902808 kubelet[2739]: E1009 07:23:06.902762 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.902979 containerd[1602]: time="2024-10-09T07:23:06.902923998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764b9dcb45-wctk7,Uid:c761c5de-6ab7-4168-887f-c5c455c2c933,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6b57414fd3ac02763e76bb8025d90d3484c5932212a8d1be9562cd3d2ba0625\"" Oct 9 07:23:06.904689 kubelet[2739]: E1009 07:23:06.904648 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.904689 kubelet[2739]: W1009 07:23:06.904663 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.904973 kubelet[2739]: E1009 07:23:06.904695 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.905147 kubelet[2739]: E1009 07:23:06.905123 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.905304 kubelet[2739]: W1009 07:23:06.905284 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.905370 kubelet[2739]: E1009 07:23:06.905316 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.905852 kubelet[2739]: E1009 07:23:06.905673 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.905852 kubelet[2739]: W1009 07:23:06.905687 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.905852 kubelet[2739]: E1009 07:23:06.905770 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.906073 kubelet[2739]: E1009 07:23:06.905991 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.906073 kubelet[2739]: W1009 07:23:06.905999 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.906073 kubelet[2739]: E1009 07:23:06.906011 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.906947 kubelet[2739]: E1009 07:23:06.906805 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.906947 kubelet[2739]: W1009 07:23:06.906817 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.906947 kubelet[2739]: E1009 07:23:06.906829 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.907666 kubelet[2739]: E1009 07:23:06.907602 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.907666 kubelet[2739]: W1009 07:23:06.907614 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.907666 kubelet[2739]: E1009 07:23:06.907626 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.909324 kubelet[2739]: E1009 07:23:06.908155 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:06.909654 kubelet[2739]: E1009 07:23:06.909622 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.910084 kubelet[2739]: W1009 07:23:06.909811 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.910084 kubelet[2739]: E1009 07:23:06.909915 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.910487 kubelet[2739]: E1009 07:23:06.910364 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.910487 kubelet[2739]: W1009 07:23:06.910376 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.910487 kubelet[2739]: E1009 07:23:06.910389 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:06.912491 containerd[1602]: time="2024-10-09T07:23:06.912262335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:23:06.912491 containerd[1602]: time="2024-10-09T07:23:06.912026420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:06.912491 containerd[1602]: time="2024-10-09T07:23:06.912098777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:06.912491 containerd[1602]: time="2024-10-09T07:23:06.912117061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:06.912491 containerd[1602]: time="2024-10-09T07:23:06.912129314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:06.921120 kubelet[2739]: E1009 07:23:06.921080 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:06.921120 kubelet[2739]: W1009 07:23:06.921102 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:06.921120 kubelet[2739]: E1009 07:23:06.921127 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:06.956122 containerd[1602]: time="2024-10-09T07:23:06.956077645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mc5wc,Uid:910759d5-109b-4e87-a8c5-3796d180def4,Namespace:calico-system,Attempt:0,} returns sandbox id \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\"" Oct 9 07:23:06.957119 kubelet[2739]: E1009 07:23:06.957095 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:08.704549 containerd[1602]: time="2024-10-09T07:23:08.704494623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:08.705280 containerd[1602]: time="2024-10-09T07:23:08.705213990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:23:08.706401 containerd[1602]: time="2024-10-09T07:23:08.706361925Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:08.708771 containerd[1602]: time="2024-10-09T07:23:08.708732476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:08.709513 containerd[1602]: time="2024-10-09T07:23:08.709475799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 1.797141728s" Oct 9 
07:23:08.709581 containerd[1602]: time="2024-10-09T07:23:08.709520222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:23:08.710403 containerd[1602]: time="2024-10-09T07:23:08.710370836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:23:08.726051 containerd[1602]: time="2024-10-09T07:23:08.725818183Z" level=info msg="CreateContainer within sandbox \"d6b57414fd3ac02763e76bb8025d90d3484c5932212a8d1be9562cd3d2ba0625\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:23:08.744016 containerd[1602]: time="2024-10-09T07:23:08.743957007Z" level=info msg="CreateContainer within sandbox \"d6b57414fd3ac02763e76bb8025d90d3484c5932212a8d1be9562cd3d2ba0625\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"90852a961cbd09672e5038d4f98d496af55db03e73ab2f38eaeb797391c5c7dc\"" Oct 9 07:23:08.744673 containerd[1602]: time="2024-10-09T07:23:08.744553091Z" level=info msg="StartContainer for \"90852a961cbd09672e5038d4f98d496af55db03e73ab2f38eaeb797391c5c7dc\"" Oct 9 07:23:08.866567 kubelet[2739]: E1009 07:23:08.865255 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a" Oct 9 07:23:09.055870 containerd[1602]: time="2024-10-09T07:23:09.055818941Z" level=info msg="StartContainer for \"90852a961cbd09672e5038d4f98d496af55db03e73ab2f38eaeb797391c5c7dc\" returns successfully" Oct 9 07:23:10.016976 containerd[1602]: time="2024-10-09T07:23:10.016915462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 
07:23:10.017858 containerd[1602]: time="2024-10-09T07:23:10.017781955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:23:10.018997 containerd[1602]: time="2024-10-09T07:23:10.018928826Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:10.021754 containerd[1602]: time="2024-10-09T07:23:10.021681885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:10.022697 containerd[1602]: time="2024-10-09T07:23:10.022642717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.312236534s" Oct 9 07:23:10.022762 containerd[1602]: time="2024-10-09T07:23:10.022694534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:23:10.024739 containerd[1602]: time="2024-10-09T07:23:10.024709733Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:23:10.039857 containerd[1602]: time="2024-10-09T07:23:10.039792011Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} 
returns container id \"834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b\"" Oct 9 07:23:10.040643 containerd[1602]: time="2024-10-09T07:23:10.040367616Z" level=info msg="StartContainer for \"834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b\"" Oct 9 07:23:10.067766 kubelet[2739]: E1009 07:23:10.067590 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:10.084235 kubelet[2739]: I1009 07:23:10.084112 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-764b9dcb45-wctk7" podStartSLOduration=2.2838224289999998 podStartE2EDuration="4.084061233s" podCreationTimestamp="2024-10-09 07:23:06 +0000 UTC" firstStartedPulling="2024-10-09 07:23:06.909697143 +0000 UTC m=+22.142438174" lastFinishedPulling="2024-10-09 07:23:08.709935937 +0000 UTC m=+23.942676978" observedRunningTime="2024-10-09 07:23:10.083679203 +0000 UTC m=+25.316420274" watchObservedRunningTime="2024-10-09 07:23:10.084061233 +0000 UTC m=+25.316802274" Oct 9 07:23:10.099527 kubelet[2739]: E1009 07:23:10.099491 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.099527 kubelet[2739]: W1009 07:23:10.099546 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.099781 kubelet[2739]: E1009 07:23:10.099572 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.100068 kubelet[2739]: E1009 07:23:10.100051 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.100068 kubelet[2739]: W1009 07:23:10.100063 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.100068 kubelet[2739]: E1009 07:23:10.100076 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.100323 kubelet[2739]: E1009 07:23:10.100304 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.100323 kubelet[2739]: W1009 07:23:10.100316 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.100323 kubelet[2739]: E1009 07:23:10.100326 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.101097 kubelet[2739]: E1009 07:23:10.101062 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.101097 kubelet[2739]: W1009 07:23:10.101077 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.101097 kubelet[2739]: E1009 07:23:10.101090 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.103715 kubelet[2739]: E1009 07:23:10.103528 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.103715 kubelet[2739]: W1009 07:23:10.103554 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.103715 kubelet[2739]: E1009 07:23:10.103576 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.103887 kubelet[2739]: E1009 07:23:10.103869 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.103887 kubelet[2739]: W1009 07:23:10.103881 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.103974 kubelet[2739]: E1009 07:23:10.103892 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.104561 kubelet[2739]: E1009 07:23:10.104525 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.104561 kubelet[2739]: W1009 07:23:10.104544 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.104561 kubelet[2739]: E1009 07:23:10.104556 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.105420 kubelet[2739]: E1009 07:23:10.105236 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.105420 kubelet[2739]: W1009 07:23:10.105290 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.105420 kubelet[2739]: E1009 07:23:10.105304 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.106875 kubelet[2739]: E1009 07:23:10.106750 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.106875 kubelet[2739]: W1009 07:23:10.106867 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.106947 kubelet[2739]: E1009 07:23:10.106887 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.108301 kubelet[2739]: E1009 07:23:10.108283 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.108301 kubelet[2739]: W1009 07:23:10.108298 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.108395 kubelet[2739]: E1009 07:23:10.108312 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.108831 kubelet[2739]: E1009 07:23:10.108805 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.108958 kubelet[2739]: W1009 07:23:10.108886 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.108958 kubelet[2739]: E1009 07:23:10.108921 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.109683 kubelet[2739]: E1009 07:23:10.109417 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.109683 kubelet[2739]: W1009 07:23:10.109489 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.109683 kubelet[2739]: E1009 07:23:10.109531 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.110422 kubelet[2739]: E1009 07:23:10.110154 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.110422 kubelet[2739]: W1009 07:23:10.110172 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.110422 kubelet[2739]: E1009 07:23:10.110188 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.110422 kubelet[2739]: E1009 07:23:10.110414 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.110422 kubelet[2739]: W1009 07:23:10.110426 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.110597 kubelet[2739]: E1009 07:23:10.110440 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.110739 kubelet[2739]: E1009 07:23:10.110719 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.110739 kubelet[2739]: W1009 07:23:10.110738 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.110812 kubelet[2739]: E1009 07:23:10.110753 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.126256 containerd[1602]: time="2024-10-09T07:23:10.126207595Z" level=info msg="StartContainer for \"834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b\" returns successfully" Oct 9 07:23:10.129391 kubelet[2739]: E1009 07:23:10.129361 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.129391 kubelet[2739]: W1009 07:23:10.129389 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.129516 kubelet[2739]: E1009 07:23:10.129415 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.129898 kubelet[2739]: E1009 07:23:10.129868 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.129898 kubelet[2739]: W1009 07:23:10.129890 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.129959 kubelet[2739]: E1009 07:23:10.129932 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.130813 kubelet[2739]: E1009 07:23:10.130785 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.130892 kubelet[2739]: W1009 07:23:10.130864 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.130946 kubelet[2739]: E1009 07:23:10.130909 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.131404 kubelet[2739]: E1009 07:23:10.131307 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.131404 kubelet[2739]: W1009 07:23:10.131329 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.131404 kubelet[2739]: E1009 07:23:10.131359 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.132057 kubelet[2739]: E1009 07:23:10.131761 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.132057 kubelet[2739]: W1009 07:23:10.131775 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.132057 kubelet[2739]: E1009 07:23:10.131794 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.132057 kubelet[2739]: E1009 07:23:10.132027 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.132057 kubelet[2739]: W1009 07:23:10.132035 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.132057 kubelet[2739]: E1009 07:23:10.132063 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.132436 kubelet[2739]: E1009 07:23:10.132277 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.132436 kubelet[2739]: W1009 07:23:10.132295 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.132436 kubelet[2739]: E1009 07:23:10.132352 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.132635 kubelet[2739]: E1009 07:23:10.132608 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.132635 kubelet[2739]: W1009 07:23:10.132619 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.132716 kubelet[2739]: E1009 07:23:10.132652 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.132851 kubelet[2739]: E1009 07:23:10.132831 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.132851 kubelet[2739]: W1009 07:23:10.132843 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.132942 kubelet[2739]: E1009 07:23:10.132875 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.133103 kubelet[2739]: E1009 07:23:10.133084 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.133103 kubelet[2739]: W1009 07:23:10.133096 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.133183 kubelet[2739]: E1009 07:23:10.133114 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.133418 kubelet[2739]: E1009 07:23:10.133396 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.133418 kubelet[2739]: W1009 07:23:10.133409 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.133559 kubelet[2739]: E1009 07:23:10.133427 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.133705 kubelet[2739]: E1009 07:23:10.133684 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.133705 kubelet[2739]: W1009 07:23:10.133697 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.133789 kubelet[2739]: E1009 07:23:10.133715 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.134031 kubelet[2739]: E1009 07:23:10.134010 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.134031 kubelet[2739]: W1009 07:23:10.134022 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.134113 kubelet[2739]: E1009 07:23:10.134041 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.134485 kubelet[2739]: E1009 07:23:10.134444 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.134485 kubelet[2739]: W1009 07:23:10.134479 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.134602 kubelet[2739]: E1009 07:23:10.134522 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.134704 kubelet[2739]: E1009 07:23:10.134689 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.134704 kubelet[2739]: W1009 07:23:10.134701 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.134765 kubelet[2739]: E1009 07:23:10.134738 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.134932 kubelet[2739]: E1009 07:23:10.134915 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.134932 kubelet[2739]: W1009 07:23:10.134929 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.134986 kubelet[2739]: E1009 07:23:10.134949 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:23:10.135203 kubelet[2739]: E1009 07:23:10.135187 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.135203 kubelet[2739]: W1009 07:23:10.135200 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.135261 kubelet[2739]: E1009 07:23:10.135211 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:23:10.135667 kubelet[2739]: E1009 07:23:10.135652 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:23:10.135667 kubelet[2739]: W1009 07:23:10.135664 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:23:10.135743 kubelet[2739]: E1009 07:23:10.135675 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 07:23:10.207660 containerd[1602]: time="2024-10-09T07:23:10.207584119Z" level=info msg="shim disconnected" id=834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b namespace=k8s.io
Oct 9 07:23:10.207660 containerd[1602]: time="2024-10-09T07:23:10.207654392Z" level=warning msg="cleaning up after shim disconnected" id=834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b namespace=k8s.io
Oct 9 07:23:10.207660 containerd[1602]: time="2024-10-09T07:23:10.207665172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:23:10.720288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-834056c3e4e72768551d4f61232db36e32a9ffcf3d9481e02da7af313830247b-rootfs.mount: Deactivated successfully.
Oct 9 07:23:10.864903 kubelet[2739]: E1009 07:23:10.864838 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a"
Oct 9 07:23:11.069298 kubelet[2739]: I1009 07:23:11.069251 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 07:23:11.070203 kubelet[2739]: E1009 07:23:11.070178 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:11.070495 kubelet[2739]: E1009 07:23:11.070295 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:11.071596 containerd[1602]: time="2024-10-09T07:23:11.071542653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 9 07:23:12.864138 kubelet[2739]: E1009 07:23:12.863714 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a"
Oct 9 07:23:12.987692 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:34160.service - OpenSSH per-connection server daemon (10.0.0.1:34160).
Oct 9 07:23:13.276147 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 34160 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:13.278613 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:13.283924 systemd-logind[1575]: New session 8 of user core.
Oct 9 07:23:13.289760 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 07:23:13.430044 sshd[3460]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:13.436733 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:34160.service: Deactivated successfully.
Oct 9 07:23:13.440158 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 07:23:13.440819 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit.
Oct 9 07:23:13.442436 systemd-logind[1575]: Removed session 8.
Oct 9 07:23:14.401714 containerd[1602]: time="2024-10-09T07:23:14.401648289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:14.402404 containerd[1602]: time="2024-10-09T07:23:14.402351242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 9 07:23:14.403833 containerd[1602]: time="2024-10-09T07:23:14.403783208Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:14.406238 containerd[1602]: time="2024-10-09T07:23:14.406201531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:14.406871 containerd[1602]: time="2024-10-09T07:23:14.406835705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.335237877s"
Oct 9 07:23:14.406906 containerd[1602]: time="2024-10-09T07:23:14.406869398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 9 07:23:14.408747 containerd[1602]: time="2024-10-09T07:23:14.408718089Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 9 07:23:14.421504 containerd[1602]: time="2024-10-09T07:23:14.421443788Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f\""
Oct 9 07:23:14.422163 containerd[1602]: time="2024-10-09T07:23:14.421907110Z" level=info msg="StartContainer for \"f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f\""
Oct 9 07:23:14.863919 kubelet[2739]: E1009 07:23:14.863881 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a"
Oct 9 07:23:15.033944 containerd[1602]: time="2024-10-09T07:23:15.033678284Z" level=info msg="StartContainer for \"f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f\" returns successfully"
Oct 9 07:23:15.034858 kubelet[2739]: E1009 07:23:15.034833 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:15.079180 kubelet[2739]: E1009 07:23:15.079128 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:16.081495 kubelet[2739]: E1009 07:23:16.081352 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:16.116039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f-rootfs.mount: Deactivated successfully.
Oct 9 07:23:16.119930 containerd[1602]: time="2024-10-09T07:23:16.119854721Z" level=info msg="shim disconnected" id=f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f namespace=k8s.io
Oct 9 07:23:16.120289 containerd[1602]: time="2024-10-09T07:23:16.119930725Z" level=warning msg="cleaning up after shim disconnected" id=f42329a952e308b802acc5c7e97a51edf4ab482cfaef2d6326562de5e903fb1f namespace=k8s.io
Oct 9 07:23:16.120289 containerd[1602]: time="2024-10-09T07:23:16.119940243Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:23:16.143101 kubelet[2739]: I1009 07:23:16.143035 2739 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 9 07:23:16.163439 kubelet[2739]: I1009 07:23:16.163100 2739 topology_manager.go:215] "Topology Admit Handler" podUID="9a35da6f-9888-4ff0-a65e-ee4182fdf802" podNamespace="kube-system" podName="coredns-76f75df574-4rt2z"
Oct 9 07:23:16.166958 kubelet[2739]: I1009 07:23:16.166709 2739 topology_manager.go:215] "Topology Admit Handler" podUID="7cb1bce7-bd6f-458f-8664-2b72f8e26245" podNamespace="calico-system" podName="calico-kube-controllers-7cb586c8b9-kflrc"
Oct 9 07:23:16.166958 kubelet[2739]: I1009 07:23:16.166885 2739 topology_manager.go:215] "Topology Admit Handler" podUID="1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523" podNamespace="kube-system" podName="coredns-76f75df574-82p6t"
Oct 9 07:23:16.274142 kubelet[2739]: I1009 07:23:16.274090 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523-config-volume\") pod \"coredns-76f75df574-82p6t\" (UID: \"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523\") " pod="kube-system/coredns-76f75df574-82p6t"
Oct 9 07:23:16.274142 kubelet[2739]: I1009 07:23:16.274150 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdm9\" (UniqueName: \"kubernetes.io/projected/1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523-kube-api-access-svdm9\") pod \"coredns-76f75df574-82p6t\" (UID: \"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523\") " pod="kube-system/coredns-76f75df574-82p6t"
Oct 9 07:23:16.274142 kubelet[2739]: I1009 07:23:16.274180 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a35da6f-9888-4ff0-a65e-ee4182fdf802-config-volume\") pod \"coredns-76f75df574-4rt2z\" (UID: \"9a35da6f-9888-4ff0-a65e-ee4182fdf802\") " pod="kube-system/coredns-76f75df574-4rt2z"
Oct 9 07:23:16.274407 kubelet[2739]: I1009 07:23:16.274207 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlnf\" (UniqueName: \"kubernetes.io/projected/7cb1bce7-bd6f-458f-8664-2b72f8e26245-kube-api-access-vzlnf\") pod \"calico-kube-controllers-7cb586c8b9-kflrc\" (UID: \"7cb1bce7-bd6f-458f-8664-2b72f8e26245\") " pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc"
Oct 9 07:23:16.274407 kubelet[2739]: I1009 07:23:16.274383 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmz9\" (UniqueName: \"kubernetes.io/projected/9a35da6f-9888-4ff0-a65e-ee4182fdf802-kube-api-access-vzmz9\") pod \"coredns-76f75df574-4rt2z\" (UID: \"9a35da6f-9888-4ff0-a65e-ee4182fdf802\") " pod="kube-system/coredns-76f75df574-4rt2z"
Oct 9 07:23:16.274560 kubelet[2739]: I1009 07:23:16.274501 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb1bce7-bd6f-458f-8664-2b72f8e26245-tigera-ca-bundle\") pod \"calico-kube-controllers-7cb586c8b9-kflrc\" (UID: \"7cb1bce7-bd6f-458f-8664-2b72f8e26245\") " pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc"
Oct 9 07:23:16.470764 kubelet[2739]: E1009 07:23:16.470614 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:16.471953 containerd[1602]: time="2024-10-09T07:23:16.471834535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4rt2z,Uid:9a35da6f-9888-4ff0-a65e-ee4182fdf802,Namespace:kube-system,Attempt:0,}"
Oct 9 07:23:16.479516 kubelet[2739]: E1009 07:23:16.479486 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:16.480434 containerd[1602]: time="2024-10-09T07:23:16.480388451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb586c8b9-kflrc,Uid:7cb1bce7-bd6f-458f-8664-2b72f8e26245,Namespace:calico-system,Attempt:0,}"
Oct 9 07:23:16.480580 containerd[1602]: time="2024-10-09T07:23:16.480533134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82p6t,Uid:1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523,Namespace:kube-system,Attempt:0,}"
Oct 9 07:23:16.570062 containerd[1602]: time="2024-10-09T07:23:16.570000025Z" level=error msg="Failed to destroy network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.570699 containerd[1602]: time="2024-10-09T07:23:16.570626283Z" level=error msg="encountered an error cleaning up failed sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.570759 containerd[1602]: time="2024-10-09T07:23:16.570688119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb586c8b9-kflrc,Uid:7cb1bce7-bd6f-458f-8664-2b72f8e26245,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.570887 containerd[1602]: time="2024-10-09T07:23:16.570794740Z" level=error msg="Failed to destroy network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.571205 kubelet[2739]: E1009 07:23:16.571163 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.571321 kubelet[2739]: E1009 07:23:16.571268 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc"
Oct 9 07:23:16.571321 kubelet[2739]: E1009 07:23:16.571297 2739 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc"
Oct 9 07:23:16.571481 kubelet[2739]: E1009 07:23:16.571391 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb586c8b9-kflrc_calico-system(7cb1bce7-bd6f-458f-8664-2b72f8e26245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb586c8b9-kflrc_calico-system(7cb1bce7-bd6f-458f-8664-2b72f8e26245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc" podUID="7cb1bce7-bd6f-458f-8664-2b72f8e26245"
Oct 9 07:23:16.571547 containerd[1602]: time="2024-10-09T07:23:16.571310090Z" level=error msg="encountered an error cleaning up failed sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.571547 containerd[1602]: time="2024-10-09T07:23:16.571376074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4rt2z,Uid:9a35da6f-9888-4ff0-a65e-ee4182fdf802,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.571728 kubelet[2739]: E1009 07:23:16.571692 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.572315 kubelet[2739]: E1009 07:23:16.571813 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4rt2z"
Oct 9 07:23:16.572315 kubelet[2739]: E1009 07:23:16.571852 2739 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4rt2z"
Oct 9 07:23:16.572315 kubelet[2739]: E1009 07:23:16.571927 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4rt2z_kube-system(9a35da6f-9888-4ff0-a65e-ee4182fdf802)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4rt2z_kube-system(9a35da6f-9888-4ff0-a65e-ee4182fdf802)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4rt2z" podUID="9a35da6f-9888-4ff0-a65e-ee4182fdf802"
Oct 9 07:23:16.572976 containerd[1602]: time="2024-10-09T07:23:16.572927003Z" level=error msg="Failed to destroy network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.573398 containerd[1602]: time="2024-10-09T07:23:16.573354026Z" level=error msg="encountered an error cleaning up failed sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.573532 containerd[1602]: time="2024-10-09T07:23:16.573397478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82p6t,Uid:1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.573633 kubelet[2739]: E1009 07:23:16.573591 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.573712 kubelet[2739]: E1009 07:23:16.573639 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-82p6t"
Oct 9 07:23:16.573712 kubelet[2739]: E1009 07:23:16.573661 2739 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-82p6t"
Oct 9 07:23:16.573798 kubelet[2739]: E1009 07:23:16.573722 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-82p6t_kube-system(1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-82p6t_kube-system(1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-82p6t" podUID="1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523"
Oct 9 07:23:16.867049 containerd[1602]: time="2024-10-09T07:23:16.867000987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpwqt,Uid:cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a,Namespace:calico-system,Attempt:0,}"
Oct 9 07:23:16.924030 containerd[1602]: time="2024-10-09T07:23:16.923966666Z" level=error msg="Failed to destroy network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.924389 containerd[1602]: time="2024-10-09T07:23:16.924357863Z" level=error msg="encountered an error cleaning up failed sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.924443 containerd[1602]: time="2024-10-09T07:23:16.924406374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpwqt,Uid:cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.924730 kubelet[2739]: E1009 07:23:16.924699 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:16.924797 kubelet[2739]: E1009 07:23:16.924765 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vpwqt"
Oct 9 07:23:16.924797 kubelet[2739]: E1009 07:23:16.924788 2739 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vpwqt"
Oct 9 07:23:16.924881 kubelet[2739]: E1009 07:23:16.924850 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vpwqt_calico-system(cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vpwqt_calico-system(cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a"
Oct 9 07:23:17.084013 kubelet[2739]: I1009 07:23:17.083980 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45"
Oct 9 07:23:17.084695 containerd[1602]: time="2024-10-09T07:23:17.084664248Z" level=info msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\""
Oct 9 07:23:17.084909 containerd[1602]: time="2024-10-09T07:23:17.084882989Z" level=info msg="Ensure that sandbox 3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45 in task-service has been cleanup successfully"
Oct 9 07:23:17.085501 kubelet[2739]: I1009 07:23:17.085469 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e"
Oct 9 07:23:17.087191 containerd[1602]: time="2024-10-09T07:23:17.087138732Z" level=info msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\""
Oct 9 07:23:17.087477 containerd[1602]: time="2024-10-09T07:23:17.087304204Z" level=info msg="Ensure that sandbox 85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e in task-service has been cleanup successfully"
Oct 9 07:23:17.090874 kubelet[2739]: I1009 07:23:17.090175 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1"
Oct 9 07:23:17.090874 kubelet[2739]: E1009 07:23:17.090227 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:23:17.091054 containerd[1602]: time="2024-10-09T07:23:17.090617578Z" level=info msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\""
Oct 9 07:23:17.091054 containerd[1602]: time="2024-10-09T07:23:17.090822724Z" level=info msg="Ensure that sandbox e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1 in task-service has been cleanup successfully"
Oct 9 07:23:17.091532 containerd[1602]: time="2024-10-09T07:23:17.091480442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 9 07:23:17.095036 kubelet[2739]: I1009 07:23:17.094985 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a"
Oct 9 07:23:17.095689 containerd[1602]: time="2024-10-09T07:23:17.095642622Z" level=info msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\""
Oct 9 07:23:17.095849 containerd[1602]: time="2024-10-09T07:23:17.095829774Z" level=info msg="Ensure that sandbox aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a in task-service has been cleanup successfully"
Oct 9 07:23:17.119335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a-shm.mount: Deactivated successfully.
Oct 9 07:23:17.122967 containerd[1602]: time="2024-10-09T07:23:17.122733402Z" level=error msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" failed" error="failed to destroy network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:17.123886 kubelet[2739]: E1009 07:23:17.123026 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45"
Oct 9 07:23:17.123886 kubelet[2739]: E1009 07:23:17.123178 2739 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45"}
Oct 9 07:23:17.123886 kubelet[2739]: E1009 07:23:17.123225 2739 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:23:17.123886 kubelet[2739]: E1009 07:23:17.123261 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-82p6t" podUID="1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523"
Oct 9 07:23:17.129908 containerd[1602]: time="2024-10-09T07:23:17.129863777Z" level=error msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" failed" error="failed to destroy network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:17.130393 kubelet[2739]: E1009 07:23:17.130245 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e"
Oct 9 07:23:17.130393 kubelet[2739]: E1009 07:23:17.130289 2739 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e"}
Oct 9 07:23:17.130393 kubelet[2739]: E1009 07:23:17.130327 2739 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cb1bce7-bd6f-458f-8664-2b72f8e26245\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:23:17.130393 kubelet[2739]: E1009 07:23:17.130357 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cb1bce7-bd6f-458f-8664-2b72f8e26245\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc" podUID="7cb1bce7-bd6f-458f-8664-2b72f8e26245"
Oct 9 07:23:17.140114 containerd[1602]: time="2024-10-09T07:23:17.140049731Z" level=error msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" failed" error="failed to destroy network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:17.140404 kubelet[2739]: E1009 07:23:17.140361 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1"
Oct 9 07:23:17.140404 kubelet[2739]: E1009 07:23:17.140419 2739 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1"}
Oct 9 07:23:17.140638 kubelet[2739]: E1009 07:23:17.140482 2739 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:23:17.140638 kubelet[2739]: E1009 07:23:17.140512 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vpwqt" podUID="cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a"
Oct 9 07:23:17.142255 containerd[1602]: time="2024-10-09T07:23:17.142218040Z" level=error msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" failed" error="failed to destroy network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:23:17.142441 kubelet[2739]: E1009 07:23:17.142423 2739 remote_runtime.go:222]
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:17.142493 kubelet[2739]: E1009 07:23:17.142447 2739 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a"} Oct 9 07:23:17.142571 kubelet[2739]: E1009 07:23:17.142544 2739 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a35da6f-9888-4ff0-a65e-ee4182fdf802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:23:17.142571 kubelet[2739]: E1009 07:23:17.142576 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a35da6f-9888-4ff0-a65e-ee4182fdf802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4rt2z" podUID="9a35da6f-9888-4ff0-a65e-ee4182fdf802" Oct 9 07:23:18.440793 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:34174.service - OpenSSH per-connection server 
daemon (10.0.0.1:34174). Oct 9 07:23:18.479838 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 34174 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:18.481861 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:18.486212 systemd-logind[1575]: New session 9 of user core. Oct 9 07:23:18.492808 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:23:18.613175 sshd[3791]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:18.618034 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:34174.service: Deactivated successfully. Oct 9 07:23:18.621297 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:23:18.622047 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:23:18.622919 systemd-logind[1575]: Removed session 9. Oct 9 07:23:21.004647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505022750.mount: Deactivated successfully. Oct 9 07:23:21.930263 containerd[1602]: time="2024-10-09T07:23:21.930184794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:21.931128 containerd[1602]: time="2024-10-09T07:23:21.931085167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:23:21.932547 containerd[1602]: time="2024-10-09T07:23:21.932474058Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:21.935126 containerd[1602]: time="2024-10-09T07:23:21.935078384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:21.935581 containerd[1602]: 
time="2024-10-09T07:23:21.935548939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.844035075s" Oct 9 07:23:21.935642 containerd[1602]: time="2024-10-09T07:23:21.935581470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:23:21.946928 containerd[1602]: time="2024-10-09T07:23:21.945919386Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:23:21.965690 containerd[1602]: time="2024-10-09T07:23:21.965625955Z" level=info msg="CreateContainer within sandbox \"758b40933c3efd8758b6bd8de0a31366e20cf4d6e92c6005a0c480d165a6781b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fd84b9356905a22f1b9dd649dfacff1535a8c51398fddb15a8e13f4d50b41f13\"" Oct 9 07:23:21.966395 containerd[1602]: time="2024-10-09T07:23:21.966343344Z" level=info msg="StartContainer for \"fd84b9356905a22f1b9dd649dfacff1535a8c51398fddb15a8e13f4d50b41f13\"" Oct 9 07:23:22.063969 containerd[1602]: time="2024-10-09T07:23:22.063822861Z" level=info msg="StartContainer for \"fd84b9356905a22f1b9dd649dfacff1535a8c51398fddb15a8e13f4d50b41f13\" returns successfully" Oct 9 07:23:22.108219 kubelet[2739]: E1009 07:23:22.108169 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:22.122652 kubelet[2739]: I1009 07:23:22.122542 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-node-mc5wc" podStartSLOduration=1.144643204 podStartE2EDuration="16.122503525s" podCreationTimestamp="2024-10-09 07:23:06 +0000 UTC" firstStartedPulling="2024-10-09 07:23:06.958005103 +0000 UTC m=+22.190746144" lastFinishedPulling="2024-10-09 07:23:21.935865424 +0000 UTC m=+37.168606465" observedRunningTime="2024-10-09 07:23:22.122340779 +0000 UTC m=+37.355081820" watchObservedRunningTime="2024-10-09 07:23:22.122503525 +0000 UTC m=+37.355244566" Oct 9 07:23:22.151387 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:23:22.151589 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Oct 9 07:23:23.622744 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:47236.service - OpenSSH per-connection server daemon (10.0.0.1:47236). Oct 9 07:23:23.654671 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 47236 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:23.656383 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:23.660535 systemd-logind[1575]: New session 10 of user core. Oct 9 07:23:23.664717 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:23:23.823784 sshd[3880]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:23.840888 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:47242.service - OpenSSH per-connection server daemon (10.0.0.1:47242). Oct 9 07:23:23.842442 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:47236.service: Deactivated successfully. Oct 9 07:23:23.850190 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:23:23.861369 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:23:23.864051 systemd-logind[1575]: Removed session 10. 
Oct 9 07:23:23.916732 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:23.917272 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:23.922326 systemd-logind[1575]: New session 11 of user core. Oct 9 07:23:23.929798 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:23:24.007478 kernel: bpftool[4033]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:23:24.125241 sshd[3974]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:24.136255 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:47254.service - OpenSSH per-connection server daemon (10.0.0.1:47254). Oct 9 07:23:24.136835 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:47242.service: Deactivated successfully. Oct 9 07:23:24.146087 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:23:24.147104 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:23:24.150383 systemd-logind[1575]: Removed session 11. Oct 9 07:23:24.184439 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 47254 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:24.186186 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:24.191476 systemd-logind[1575]: New session 12 of user core. Oct 9 07:23:24.199815 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:23:24.316116 systemd-networkd[1251]: vxlan.calico: Link UP Oct 9 07:23:24.316131 systemd-networkd[1251]: vxlan.calico: Gained carrier Oct 9 07:23:24.346659 sshd[4036]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:24.360002 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:47254.service: Deactivated successfully. Oct 9 07:23:24.364281 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:23:24.364613 systemd-logind[1575]: Session 12 logged out. 
Waiting for processes to exit. Oct 9 07:23:24.367411 systemd-logind[1575]: Removed session 12. Oct 9 07:23:25.088268 kubelet[2739]: I1009 07:23:25.088166 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:23:25.092724 kubelet[2739]: E1009 07:23:25.092698 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:25.584739 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Oct 9 07:23:29.362098 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:47258.service - OpenSSH per-connection server daemon (10.0.0.1:47258). Oct 9 07:23:29.405600 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 47258 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:29.408707 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:29.414284 systemd-logind[1575]: New session 13 of user core. Oct 9 07:23:29.422122 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:23:29.587829 sshd[4180]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:29.595039 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:47258.service: Deactivated successfully. Oct 9 07:23:29.598644 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:23:29.598887 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:23:29.600763 systemd-logind[1575]: Removed session 13. 
Oct 9 07:23:29.865236 containerd[1602]: time="2024-10-09T07:23:29.865092791Z" level=info msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\"" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.922 [INFO][4220] k8s.go 608: Cleaning up netns ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.922 [INFO][4220] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" iface="eth0" netns="/var/run/netns/cni-b31ba57b-1b3f-1e70-ad6c-0099bfbe1b7b" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.922 [INFO][4220] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" iface="eth0" netns="/var/run/netns/cni-b31ba57b-1b3f-1e70-ad6c-0099bfbe1b7b" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.924 [INFO][4220] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" iface="eth0" netns="/var/run/netns/cni-b31ba57b-1b3f-1e70-ad6c-0099bfbe1b7b" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.924 [INFO][4220] k8s.go 615: Releasing IP address(es) ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.924 [INFO][4220] utils.go 188: Calico CNI releasing IP address ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.990 [INFO][4227] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.990 [INFO][4227] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.990 [INFO][4227] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.997 [WARNING][4227] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.997 [INFO][4227] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:29.998 [INFO][4227] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:30.004274 containerd[1602]: 2024-10-09 07:23:30.001 [INFO][4220] k8s.go 621: Teardown processing complete. ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:30.005354 containerd[1602]: time="2024-10-09T07:23:30.005182509Z" level=info msg="TearDown network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" successfully" Oct 9 07:23:30.005354 containerd[1602]: time="2024-10-09T07:23:30.005215502Z" level=info msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" returns successfully" Oct 9 07:23:30.008382 containerd[1602]: time="2024-10-09T07:23:30.008345460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpwqt,Uid:cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a,Namespace:calico-system,Attempt:1,}" Oct 9 07:23:30.009865 systemd[1]: run-netns-cni\x2db31ba57b\x2d1b3f\x2d1e70\x2dad6c\x2d0099bfbe1b7b.mount: Deactivated successfully. 
Oct 9 07:23:30.136423 systemd-networkd[1251]: cali5a6fe7883e5: Link UP Oct 9 07:23:30.137162 systemd-networkd[1251]: cali5a6fe7883e5: Gained carrier Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.074 [INFO][4234] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vpwqt-eth0 csi-node-driver- calico-system cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a 823 0 2024-10-09 07:23:06 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-vpwqt eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5a6fe7883e5 [] []}} ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.074 [INFO][4234] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.100 [INFO][4249] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" HandleID="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.109 [INFO][4249] ipam_plugin.go 270: Auto assigning IP ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" 
HandleID="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vpwqt", "timestamp":"2024-10-09 07:23:30.100759791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.109 [INFO][4249] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.109 [INFO][4249] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.109 [INFO][4249] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.110 [INFO][4249] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.114 [INFO][4249] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.118 [INFO][4249] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.119 [INFO][4249] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.121 [INFO][4249] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.121 [INFO][4249] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.122 [INFO][4249] ipam.go 1685: Creating new handle: k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.126 [INFO][4249] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.130 [INFO][4249] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.130 [INFO][4249] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" host="localhost" Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.130 [INFO][4249] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:23:30.151149 containerd[1602]: 2024-10-09 07:23:30.130 [INFO][4249] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" HandleID="k8s-pod-network.fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.133 [INFO][4234] k8s.go 386: Populated endpoint ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vpwqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vpwqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali5a6fe7883e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.134 [INFO][4234] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.134 [INFO][4234] dataplane_linux.go 68: Setting the host side veth name to cali5a6fe7883e5 ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.137 [INFO][4234] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.138 [INFO][4234] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vpwqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e", Pod:"csi-node-driver-vpwqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5a6fe7883e5", MAC:"82:ba:5b:c9:a0:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:30.151786 containerd[1602]: 2024-10-09 07:23:30.147 [INFO][4234] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e" Namespace="calico-system" Pod="csi-node-driver-vpwqt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:30.181834 containerd[1602]: time="2024-10-09T07:23:30.181551696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:30.181834 containerd[1602]: time="2024-10-09T07:23:30.181736484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:30.181834 containerd[1602]: time="2024-10-09T07:23:30.181770137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:30.181834 containerd[1602]: time="2024-10-09T07:23:30.181788752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:30.223238 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:23:30.244261 containerd[1602]: time="2024-10-09T07:23:30.244213184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpwqt,Uid:cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e\"" Oct 9 07:23:30.246733 containerd[1602]: time="2024-10-09T07:23:30.246384801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:23:30.864886 containerd[1602]: time="2024-10-09T07:23:30.864838052Z" level=info msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\"" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] k8s.go 608: Cleaning up netns ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" iface="eth0" netns="/var/run/netns/cni-74b72dff-604d-4be1-d6e3-b9798335ebbc" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" iface="eth0" netns="/var/run/netns/cni-74b72dff-604d-4be1-d6e3-b9798335ebbc" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" iface="eth0" netns="/var/run/netns/cni-74b72dff-604d-4be1-d6e3-b9798335ebbc" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] k8s.go 615: Releasing IP address(es) ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.908 [INFO][4330] utils.go 188: Calico CNI releasing IP address ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.930 [INFO][4337] ipam_plugin.go 417: Releasing address using handleID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.930 [INFO][4337] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.930 [INFO][4337] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.935 [WARNING][4337] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.935 [INFO][4337] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.936 [INFO][4337] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:30.940768 containerd[1602]: 2024-10-09 07:23:30.938 [INFO][4330] k8s.go 621: Teardown processing complete. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:30.941659 containerd[1602]: time="2024-10-09T07:23:30.940943536Z" level=info msg="TearDown network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" successfully" Oct 9 07:23:30.941659 containerd[1602]: time="2024-10-09T07:23:30.940973352Z" level=info msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" returns successfully" Oct 9 07:23:30.941727 kubelet[2739]: E1009 07:23:30.941393 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:30.942143 containerd[1602]: time="2024-10-09T07:23:30.941823228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82p6t,Uid:1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523,Namespace:kube-system,Attempt:1,}" Oct 9 07:23:31.011971 systemd[1]: run-containerd-runc-k8s.io-fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e-runc.Z8zqGl.mount: Deactivated successfully. 
Oct 9 07:23:31.012798 systemd[1]: run-netns-cni\x2d74b72dff\x2d604d\x2d4be1\x2dd6e3\x2db9798335ebbc.mount: Deactivated successfully. Oct 9 07:23:31.050484 systemd-networkd[1251]: califee2f49e6d9: Link UP Oct 9 07:23:31.050994 systemd-networkd[1251]: califee2f49e6d9: Gained carrier Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:30.984 [INFO][4344] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--82p6t-eth0 coredns-76f75df574- kube-system 1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523 835 0 2024-10-09 07:23:01 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-82p6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califee2f49e6d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:30.984 [INFO][4344] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.016 [INFO][4357] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" HandleID="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.025 [INFO][4357] ipam_plugin.go 270: Auto assigning IP 
ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" HandleID="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc780), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-82p6t", "timestamp":"2024-10-09 07:23:31.016557635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.025 [INFO][4357] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.025 [INFO][4357] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.025 [INFO][4357] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.026 [INFO][4357] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.029 [INFO][4357] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.032 [INFO][4357] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.035 [INFO][4357] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.037 [INFO][4357] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 
2024-10-09 07:23:31.037 [INFO][4357] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.038 [INFO][4357] ipam.go 1685: Creating new handle: k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65 Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.041 [INFO][4357] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.044 [INFO][4357] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.044 [INFO][4357] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" host="localhost" Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.045 [INFO][4357] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:23:31.063301 containerd[1602]: 2024-10-09 07:23:31.045 [INFO][4357] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" HandleID="k8s-pod-network.678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.048 [INFO][4344] k8s.go 386: Populated endpoint ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--82p6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-82p6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califee2f49e6d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.048 [INFO][4344] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.048 [INFO][4344] dataplane_linux.go 68: Setting the host side veth name to califee2f49e6d9 ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.051 [INFO][4344] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.051 [INFO][4344] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--82p6t-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65", Pod:"coredns-76f75df574-82p6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califee2f49e6d9", MAC:"92:c2:ab:a1:db:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:31.064238 containerd[1602]: 2024-10-09 07:23:31.060 [INFO][4344] k8s.go 500: Wrote updated endpoint to datastore ContainerID="678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65" Namespace="kube-system" Pod="coredns-76f75df574-82p6t" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:31.089788 containerd[1602]: 
time="2024-10-09T07:23:31.089663714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:31.089788 containerd[1602]: time="2024-10-09T07:23:31.089732284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:31.089788 containerd[1602]: time="2024-10-09T07:23:31.089747823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:31.089788 containerd[1602]: time="2024-10-09T07:23:31.089758042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:31.123120 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:23:31.158716 containerd[1602]: time="2024-10-09T07:23:31.158655789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82p6t,Uid:1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523,Namespace:kube-system,Attempt:1,} returns sandbox id \"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65\"" Oct 9 07:23:31.159338 kubelet[2739]: E1009 07:23:31.159316 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:31.176250 containerd[1602]: time="2024-10-09T07:23:31.176206616Z" level=info msg="CreateContainer within sandbox \"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:23:31.192439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89340267.mount: Deactivated successfully. 
Oct 9 07:23:31.194964 containerd[1602]: time="2024-10-09T07:23:31.194925787Z" level=info msg="CreateContainer within sandbox \"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67f38d6ff12a3758aae3cac81c2eb258a512d092caeaf6a0f84a7900a247cb0b\"" Oct 9 07:23:31.195616 containerd[1602]: time="2024-10-09T07:23:31.195551692Z" level=info msg="StartContainer for \"67f38d6ff12a3758aae3cac81c2eb258a512d092caeaf6a0f84a7900a247cb0b\"" Oct 9 07:23:31.259316 containerd[1602]: time="2024-10-09T07:23:31.259271937Z" level=info msg="StartContainer for \"67f38d6ff12a3758aae3cac81c2eb258a512d092caeaf6a0f84a7900a247cb0b\" returns successfully" Oct 9 07:23:31.475564 systemd-networkd[1251]: cali5a6fe7883e5: Gained IPv6LL Oct 9 07:23:31.492288 containerd[1602]: time="2024-10-09T07:23:31.492229608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:31.493000 containerd[1602]: time="2024-10-09T07:23:31.492942045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:23:31.494180 containerd[1602]: time="2024-10-09T07:23:31.494150144Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:31.496143 containerd[1602]: time="2024-10-09T07:23:31.496109413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:31.496802 containerd[1602]: time="2024-10-09T07:23:31.496766497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.250345118s" Oct 9 07:23:31.496844 containerd[1602]: time="2024-10-09T07:23:31.496802425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:23:31.498574 containerd[1602]: time="2024-10-09T07:23:31.498530479Z" level=info msg="CreateContainer within sandbox \"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:23:31.520626 containerd[1602]: time="2024-10-09T07:23:31.520587869Z" level=info msg="CreateContainer within sandbox \"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"331792babb453d589e79ab5286514f2fd2bf6c09b86a861d7e845f1314908be3\"" Oct 9 07:23:31.521172 containerd[1602]: time="2024-10-09T07:23:31.521143593Z" level=info msg="StartContainer for \"331792babb453d589e79ab5286514f2fd2bf6c09b86a861d7e845f1314908be3\"" Oct 9 07:23:31.582210 containerd[1602]: time="2024-10-09T07:23:31.582168997Z" level=info msg="StartContainer for \"331792babb453d589e79ab5286514f2fd2bf6c09b86a861d7e845f1314908be3\" returns successfully" Oct 9 07:23:31.583678 containerd[1602]: time="2024-10-09T07:23:31.583631122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:23:31.864345 containerd[1602]: time="2024-10-09T07:23:31.864243350Z" level=info msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\"" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.903 [INFO][4512] k8s.go 608: Cleaning up netns ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 
07:23:31.903 [INFO][4512] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" iface="eth0" netns="/var/run/netns/cni-49ab1528-57ba-a3b1-fc2f-8f921ffde6f9" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.903 [INFO][4512] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" iface="eth0" netns="/var/run/netns/cni-49ab1528-57ba-a3b1-fc2f-8f921ffde6f9" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.903 [INFO][4512] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" iface="eth0" netns="/var/run/netns/cni-49ab1528-57ba-a3b1-fc2f-8f921ffde6f9" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.903 [INFO][4512] k8s.go 615: Releasing IP address(es) ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.903 [INFO][4512] utils.go 188: Calico CNI releasing IP address ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.926 [INFO][4520] ipam_plugin.go 417: Releasing address using handleID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.926 [INFO][4520] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.926 [INFO][4520] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.931 [WARNING][4520] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.931 [INFO][4520] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.932 [INFO][4520] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:31.937508 containerd[1602]: 2024-10-09 07:23:31.934 [INFO][4512] k8s.go 621: Teardown processing complete. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:31.937946 containerd[1602]: time="2024-10-09T07:23:31.937705728Z" level=info msg="TearDown network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" successfully" Oct 9 07:23:31.937946 containerd[1602]: time="2024-10-09T07:23:31.937734662Z" level=info msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" returns successfully" Oct 9 07:23:31.938576 containerd[1602]: time="2024-10-09T07:23:31.938532320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb586c8b9-kflrc,Uid:7cb1bce7-bd6f-458f-8664-2b72f8e26245,Namespace:calico-system,Attempt:1,}" Oct 9 07:23:32.013368 systemd[1]: run-netns-cni\x2d49ab1528\x2d57ba\x2da3b1\x2dfc2f\x2d8f921ffde6f9.mount: Deactivated successfully. 
Oct 9 07:23:32.042954 systemd-networkd[1251]: calidc60215b4fa: Link UP Oct 9 07:23:32.043426 systemd-networkd[1251]: calidc60215b4fa: Gained carrier Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:31.982 [INFO][4529] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0 calico-kube-controllers-7cb586c8b9- calico-system 7cb1bce7-bd6f-458f-8664-2b72f8e26245 849 0 2024-10-09 07:23:06 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cb586c8b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cb586c8b9-kflrc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc60215b4fa [] []}} ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:31.982 [INFO][4529] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.007 [INFO][4543] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" HandleID="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.014 
[INFO][4543] ipam_plugin.go 270: Auto assigning IP ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" HandleID="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cb586c8b9-kflrc", "timestamp":"2024-10-09 07:23:32.007844264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.015 [INFO][4543] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.015 [INFO][4543] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.015 [INFO][4543] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.016 [INFO][4543] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.020 [INFO][4543] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.024 [INFO][4543] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.025 [INFO][4543] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.027 [INFO][4543] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.027 [INFO][4543] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.028 [INFO][4543] ipam.go 1685: Creating new handle: k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.031 [INFO][4543] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.036 [INFO][4543] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" host="localhost" Oct 9 
07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.036 [INFO][4543] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" host="localhost" Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.036 [INFO][4543] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:32.055133 containerd[1602]: 2024-10-09 07:23:32.036 [INFO][4543] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" HandleID="k8s-pod-network.0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.039 [INFO][4529] k8s.go 386: Populated endpoint ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0", GenerateName:"calico-kube-controllers-7cb586c8b9-", Namespace:"calico-system", SelfLink:"", UID:"7cb1bce7-bd6f-458f-8664-2b72f8e26245", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb586c8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cb586c8b9-kflrc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc60215b4fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.039 [INFO][4529] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.039 [INFO][4529] dataplane_linux.go 68: Setting the host side veth name to calidc60215b4fa ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.042 [INFO][4529] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.042 [INFO][4529] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" 
Pod="calico-kube-controllers-7cb586c8b9-kflrc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0", GenerateName:"calico-kube-controllers-7cb586c8b9-", Namespace:"calico-system", SelfLink:"", UID:"7cb1bce7-bd6f-458f-8664-2b72f8e26245", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb586c8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f", Pod:"calico-kube-controllers-7cb586c8b9-kflrc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc60215b4fa", MAC:"de:50:05:64:b1:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:32.056021 containerd[1602]: 2024-10-09 07:23:32.052 [INFO][4529] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f" Namespace="calico-system" Pod="calico-kube-controllers-7cb586c8b9-kflrc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:32.075472 containerd[1602]: time="2024-10-09T07:23:32.075356859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:32.075564 containerd[1602]: time="2024-10-09T07:23:32.075442571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:32.075564 containerd[1602]: time="2024-10-09T07:23:32.075482315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:32.075564 containerd[1602]: time="2024-10-09T07:23:32.075499347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:32.103008 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:23:32.132505 containerd[1602]: time="2024-10-09T07:23:32.131334136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb586c8b9-kflrc,Uid:7cb1bce7-bd6f-458f-8664-2b72f8e26245,Namespace:calico-system,Attempt:1,} returns sandbox id \"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f\"" Oct 9 07:23:32.135238 kubelet[2739]: E1009 07:23:32.135215 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:32.143068 kubelet[2739]: I1009 07:23:32.142999 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-82p6t" podStartSLOduration=31.142931251 podStartE2EDuration="31.142931251s" podCreationTimestamp="2024-10-09 07:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:23:32.142352955 +0000 UTC m=+47.375093996" watchObservedRunningTime="2024-10-09 07:23:32.142931251 +0000 UTC m=+47.375672292" Oct 9 07:23:32.864789 containerd[1602]: time="2024-10-09T07:23:32.864705557Z" level=info msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\"" Oct 9 07:23:32.883681 systemd-networkd[1251]: califee2f49e6d9: Gained IPv6LL Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] k8s.go 608: Cleaning up netns ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" iface="eth0" netns="/var/run/netns/cni-5bda7e50-87d6-c6da-849b-011fd3cdb081" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" iface="eth0" netns="/var/run/netns/cni-5bda7e50-87d6-c6da-849b-011fd3cdb081" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" iface="eth0" netns="/var/run/netns/cni-5bda7e50-87d6-c6da-849b-011fd3cdb081" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] k8s.go 615: Releasing IP address(es) ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.909 [INFO][4626] utils.go 188: Calico CNI releasing IP address ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.934 [INFO][4634] ipam_plugin.go 417: Releasing address using handleID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.934 [INFO][4634] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.935 [INFO][4634] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.940 [WARNING][4634] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.940 [INFO][4634] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.941 [INFO][4634] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:32.946770 containerd[1602]: 2024-10-09 07:23:32.944 [INFO][4626] k8s.go 621: Teardown processing complete. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:32.947563 containerd[1602]: time="2024-10-09T07:23:32.946981604Z" level=info msg="TearDown network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" successfully" Oct 9 07:23:32.947563 containerd[1602]: time="2024-10-09T07:23:32.947011049Z" level=info msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" returns successfully" Oct 9 07:23:32.947643 kubelet[2739]: E1009 07:23:32.947388 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:32.949524 containerd[1602]: time="2024-10-09T07:23:32.948221402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4rt2z,Uid:9a35da6f-9888-4ff0-a65e-ee4182fdf802,Namespace:kube-system,Attempt:1,}" Oct 9 07:23:32.951763 systemd[1]: run-netns-cni\x2d5bda7e50\x2d87d6\x2dc6da\x2d849b\x2d011fd3cdb081.mount: Deactivated successfully. 
Oct 9 07:23:33.139354 kubelet[2739]: E1009 07:23:33.139231 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:33.170539 systemd-networkd[1251]: cali066d0571529: Link UP Oct 9 07:23:33.171567 systemd-networkd[1251]: cali066d0571529: Gained carrier Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.110 [INFO][4646] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--4rt2z-eth0 coredns-76f75df574- kube-system 9a35da6f-9888-4ff0-a65e-ee4182fdf802 868 0 2024-10-09 07:23:01 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-4rt2z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali066d0571529 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.110 [INFO][4646] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.136 [INFO][4660] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" HandleID="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.188645 
containerd[1602]: 2024-10-09 07:23:33.144 [INFO][4660] ipam_plugin.go 270: Auto assigning IP ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" HandleID="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df7e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-4rt2z", "timestamp":"2024-10-09 07:23:33.136021066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.145 [INFO][4660] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.145 [INFO][4660] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.145 [INFO][4660] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.146 [INFO][4660] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.150 [INFO][4660] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.153 [INFO][4660] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.154 [INFO][4660] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.156 [INFO][4660] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.156 [INFO][4660] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.157 [INFO][4660] ipam.go 1685: Creating new handle: k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705 Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.160 [INFO][4660] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.165 [INFO][4660] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" host="localhost" Oct 9 
07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.165 [INFO][4660] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" host="localhost" Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.165 [INFO][4660] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:33.188645 containerd[1602]: 2024-10-09 07:23:33.165 [INFO][4660] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" HandleID="k8s-pod-network.90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.168 [INFO][4646] k8s.go 386: Populated endpoint ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4rt2z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a35da6f-9888-4ff0-a65e-ee4182fdf802", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-4rt2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali066d0571529", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.168 [INFO][4646] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.168 [INFO][4646] dataplane_linux.go 68: Setting the host side veth name to cali066d0571529 ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.170 [INFO][4646] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.170 [INFO][4646] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4rt2z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a35da6f-9888-4ff0-a65e-ee4182fdf802", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705", Pod:"coredns-76f75df574-4rt2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali066d0571529", MAC:"62:37:36:ab:da:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:33.189529 containerd[1602]: 2024-10-09 07:23:33.178 [INFO][4646] k8s.go 500: Wrote updated endpoint to datastore ContainerID="90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705" Namespace="kube-system" Pod="coredns-76f75df574-4rt2z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:33.214193 containerd[1602]: time="2024-10-09T07:23:33.214097806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:23:33.214325 containerd[1602]: time="2024-10-09T07:23:33.214164361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:33.214325 containerd[1602]: time="2024-10-09T07:23:33.214179700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:23:33.214325 containerd[1602]: time="2024-10-09T07:23:33.214189137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:23:33.242962 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:23:33.248738 containerd[1602]: time="2024-10-09T07:23:33.248706625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:33.249987 containerd[1602]: time="2024-10-09T07:23:33.249903683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:23:33.251723 containerd[1602]: time="2024-10-09T07:23:33.251068669Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:33.253125 containerd[1602]: time="2024-10-09T07:23:33.253091037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:33.253938 containerd[1602]: time="2024-10-09T07:23:33.253903632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.670237132s" Oct 9 07:23:33.253938 containerd[1602]: time="2024-10-09T07:23:33.253933949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:23:33.255442 containerd[1602]: 
time="2024-10-09T07:23:33.255413788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:23:33.256648 containerd[1602]: time="2024-10-09T07:23:33.256523792Z" level=info msg="CreateContainer within sandbox \"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:23:33.269378 containerd[1602]: time="2024-10-09T07:23:33.269334815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4rt2z,Uid:9a35da6f-9888-4ff0-a65e-ee4182fdf802,Namespace:kube-system,Attempt:1,} returns sandbox id \"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705\"" Oct 9 07:23:33.270040 kubelet[2739]: E1009 07:23:33.270016 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:33.274532 containerd[1602]: time="2024-10-09T07:23:33.274476849Z" level=info msg="CreateContainer within sandbox \"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:23:33.279121 containerd[1602]: time="2024-10-09T07:23:33.279077417Z" level=info msg="CreateContainer within sandbox \"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb4852b0fa42892702b296acc4870ff809dac0c64313f0092d4c346d48ea3cac\"" Oct 9 07:23:33.279671 containerd[1602]: time="2024-10-09T07:23:33.279621538Z" level=info msg="StartContainer for \"fb4852b0fa42892702b296acc4870ff809dac0c64313f0092d4c346d48ea3cac\"" Oct 9 07:23:33.290836 containerd[1602]: time="2024-10-09T07:23:33.290788334Z" level=info msg="CreateContainer within sandbox \"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"68e83a49c4ff393b407d57db12f4a2116f3e2494161cee3a1c9b341ba252d584\"" Oct 9 07:23:33.292423 containerd[1602]: time="2024-10-09T07:23:33.291445497Z" level=info msg="StartContainer for \"68e83a49c4ff393b407d57db12f4a2116f3e2494161cee3a1c9b341ba252d584\"" Oct 9 07:23:33.346471 containerd[1602]: time="2024-10-09T07:23:33.346401588Z" level=info msg="StartContainer for \"68e83a49c4ff393b407d57db12f4a2116f3e2494161cee3a1c9b341ba252d584\" returns successfully" Oct 9 07:23:33.361518 containerd[1602]: time="2024-10-09T07:23:33.360589196Z" level=info msg="StartContainer for \"fb4852b0fa42892702b296acc4870ff809dac0c64313f0092d4c346d48ea3cac\" returns successfully" Oct 9 07:23:33.584663 systemd-networkd[1251]: calidc60215b4fa: Gained IPv6LL Oct 9 07:23:33.944886 kubelet[2739]: I1009 07:23:33.944744 2739 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:23:33.945765 kubelet[2739]: I1009 07:23:33.945724 2739 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:23:34.143115 kubelet[2739]: E1009 07:23:34.143083 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:34.144638 kubelet[2739]: E1009 07:23:34.144548 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:34.165612 kubelet[2739]: I1009 07:23:34.165538 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4rt2z" podStartSLOduration=33.165493323 podStartE2EDuration="33.165493323s" podCreationTimestamp="2024-10-09 07:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:23:34.152159982 +0000 UTC m=+49.384901033" watchObservedRunningTime="2024-10-09 07:23:34.165493323 +0000 UTC m=+49.398234364" Oct 9 07:23:34.174049 kubelet[2739]: I1009 07:23:34.173600 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-vpwqt" podStartSLOduration=25.164866593 podStartE2EDuration="28.173551232s" podCreationTimestamp="2024-10-09 07:23:06 +0000 UTC" firstStartedPulling="2024-10-09 07:23:30.245725724 +0000 UTC m=+45.478466765" lastFinishedPulling="2024-10-09 07:23:33.254410363 +0000 UTC m=+48.487151404" observedRunningTime="2024-10-09 07:23:34.17325201 +0000 UTC m=+49.405993051" watchObservedRunningTime="2024-10-09 07:23:34.173551232 +0000 UTC m=+49.406292273" Oct 9 07:23:34.225611 systemd-networkd[1251]: cali066d0571529: Gained IPv6LL Oct 9 07:23:34.600694 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:57160.service - OpenSSH per-connection server daemon (10.0.0.1:57160). Oct 9 07:23:34.633024 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 57160 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:34.635068 sshd[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:34.639609 systemd-logind[1575]: New session 14 of user core. Oct 9 07:23:34.643962 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:23:34.785231 sshd[4808]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:34.788882 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:23:34.790226 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:57160.service: Deactivated successfully. Oct 9 07:23:34.796714 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:23:34.798142 systemd-logind[1575]: Removed session 14. 
Oct 9 07:23:35.145583 kubelet[2739]: E1009 07:23:35.145548 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:35.296490 containerd[1602]: time="2024-10-09T07:23:35.296283509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:35.297422 containerd[1602]: time="2024-10-09T07:23:35.297098839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:23:35.298924 containerd[1602]: time="2024-10-09T07:23:35.298893629Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:35.301929 containerd[1602]: time="2024-10-09T07:23:35.301201171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:23:35.301988 containerd[1602]: time="2024-10-09T07:23:35.301919589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.04647351s" Oct 9 07:23:35.301988 containerd[1602]: time="2024-10-09T07:23:35.301947762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:23:35.314760 containerd[1602]: 
time="2024-10-09T07:23:35.314708838Z" level=info msg="CreateContainer within sandbox \"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:23:35.330017 containerd[1602]: time="2024-10-09T07:23:35.329965269Z" level=info msg="CreateContainer within sandbox \"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7f95c04317666d5835c8be4891b5de6b053a856b5b7cc5b15a50ae80d73bd7c7\"" Oct 9 07:23:35.330414 containerd[1602]: time="2024-10-09T07:23:35.330384294Z" level=info msg="StartContainer for \"7f95c04317666d5835c8be4891b5de6b053a856b5b7cc5b15a50ae80d73bd7c7\"" Oct 9 07:23:35.402005 containerd[1602]: time="2024-10-09T07:23:35.401886282Z" level=info msg="StartContainer for \"7f95c04317666d5835c8be4891b5de6b053a856b5b7cc5b15a50ae80d73bd7c7\" returns successfully" Oct 9 07:23:36.149424 kubelet[2739]: E1009 07:23:36.149374 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:36.374925 kubelet[2739]: I1009 07:23:36.374873 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cb586c8b9-kflrc" podStartSLOduration=27.205309025 podStartE2EDuration="30.374830571s" podCreationTimestamp="2024-10-09 07:23:06 +0000 UTC" firstStartedPulling="2024-10-09 07:23:32.132670926 +0000 UTC m=+47.365411967" lastFinishedPulling="2024-10-09 07:23:35.302192472 +0000 UTC m=+50.534933513" observedRunningTime="2024-10-09 07:23:36.37407885 +0000 UTC m=+51.606819891" watchObservedRunningTime="2024-10-09 07:23:36.374830571 +0000 UTC m=+51.607571612" Oct 9 07:23:39.803833 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162). 
Oct 9 07:23:39.836420 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:39.838184 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:39.842443 systemd-logind[1575]: New session 15 of user core. Oct 9 07:23:39.848717 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:23:39.975752 sshd[4892]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:39.979786 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:57162.service: Deactivated successfully. Oct 9 07:23:39.982411 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:23:39.982633 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:23:39.983964 systemd-logind[1575]: Removed session 15. Oct 9 07:23:44.850841 containerd[1602]: time="2024-10-09T07:23:44.850798196Z" level=info msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\"" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.887 [WARNING][4935] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vpwqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e", Pod:"csi-node-driver-vpwqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5a6fe7883e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.887 [INFO][4935] k8s.go 608: Cleaning up netns ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.887 [INFO][4935] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" iface="eth0" netns="" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.887 [INFO][4935] k8s.go 615: Releasing IP address(es) ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.887 [INFO][4935] utils.go 188: Calico CNI releasing IP address ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.910 [INFO][4945] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.911 [INFO][4945] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.911 [INFO][4945] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.917 [WARNING][4945] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.917 [INFO][4945] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.918 [INFO][4945] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:23:44.923715 containerd[1602]: 2024-10-09 07:23:44.920 [INFO][4935] k8s.go 621: Teardown processing complete. ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:44.924411 containerd[1602]: time="2024-10-09T07:23:44.923770232Z" level=info msg="TearDown network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" successfully" Oct 9 07:23:44.924411 containerd[1602]: time="2024-10-09T07:23:44.923800619Z" level=info msg="StopPodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" returns successfully" Oct 9 07:23:44.924411 containerd[1602]: time="2024-10-09T07:23:44.924359838Z" level=info msg="RemovePodSandbox for \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\"" Oct 9 07:23:44.927326 containerd[1602]: time="2024-10-09T07:23:44.927284165Z" level=info msg="Forcibly stopping sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\"" Oct 9 07:23:44.986823 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:56620.service - OpenSSH per-connection server daemon (10.0.0.1:56620). Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.966 [WARNING][4968] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vpwqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf63fcc5-e5e4-484c-8ba3-9e724cc7a36a", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa5a56cf27649daeac97beda81d0b0a0d74aad5c88d207472e3862a108f6da9e", Pod:"csi-node-driver-vpwqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5a6fe7883e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.967 [INFO][4968] k8s.go 608: Cleaning up netns ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.967 [INFO][4968] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" iface="eth0" netns="" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.967 [INFO][4968] k8s.go 615: Releasing IP address(es) ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.967 [INFO][4968] utils.go 188: Calico CNI releasing IP address ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.990 [INFO][4975] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.990 [INFO][4975] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.990 [INFO][4975] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.995 [WARNING][4975] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.995 [INFO][4975] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" HandleID="k8s-pod-network.e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Workload="localhost-k8s-csi--node--driver--vpwqt-eth0" Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.997 [INFO][4975] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:23:45.002122 containerd[1602]: 2024-10-09 07:23:44.999 [INFO][4968] k8s.go 621: Teardown processing complete. ContainerID="e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1" Oct 9 07:23:45.002667 containerd[1602]: time="2024-10-09T07:23:45.002172337Z" level=info msg="TearDown network for sandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" successfully" Oct 9 07:23:45.126298 containerd[1602]: time="2024-10-09T07:23:45.126173137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:23:45.132672 containerd[1602]: time="2024-10-09T07:23:45.132638638Z" level=info msg="RemovePodSandbox \"e9fdfa29034062e5528c779a16b957b5040479ebc9efb463b1903198251fe9b1\" returns successfully" Oct 9 07:23:45.133152 containerd[1602]: time="2024-10-09T07:23:45.133128687Z" level=info msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\"" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.166 [WARNING][5000] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4rt2z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a35da6f-9888-4ff0-a65e-ee4182fdf802", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705", Pod:"coredns-76f75df574-4rt2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali066d0571529", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.166 [INFO][5000] k8s.go 608: Cleaning up netns 
ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.166 [INFO][5000] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" iface="eth0" netns="" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.166 [INFO][5000] k8s.go 615: Releasing IP address(es) ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.166 [INFO][5000] utils.go 188: Calico CNI releasing IP address ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.187 [INFO][5008] ipam_plugin.go 417: Releasing address using handleID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.187 [INFO][5008] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.187 [INFO][5008] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.191 [WARNING][5008] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.191 [INFO][5008] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.192 [INFO][5008] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.197436 containerd[1602]: 2024-10-09 07:23:45.194 [INFO][5000] k8s.go 621: Teardown processing complete. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.197984 containerd[1602]: time="2024-10-09T07:23:45.197484051Z" level=info msg="TearDown network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" successfully" Oct 9 07:23:45.197984 containerd[1602]: time="2024-10-09T07:23:45.197512124Z" level=info msg="StopPodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" returns successfully" Oct 9 07:23:45.198100 containerd[1602]: time="2024-10-09T07:23:45.198064560Z" level=info msg="RemovePodSandbox for \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\"" Oct 9 07:23:45.198146 containerd[1602]: time="2024-10-09T07:23:45.198105817Z" level=info msg="Forcibly stopping sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\"" Oct 9 07:23:45.211019 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 56620 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:45.213058 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:45.217744 
systemd-logind[1575]: New session 16 of user core. Oct 9 07:23:45.226840 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.236 [WARNING][5030] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4rt2z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a35da6f-9888-4ff0-a65e-ee4182fdf802", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90092036d7133cc2b653c0fc70accb340525b42c09df5e1f3c30aeab06e3a705", Pod:"coredns-76f75df574-4rt2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali066d0571529", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.236 [INFO][5030] k8s.go 608: Cleaning up netns ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.236 [INFO][5030] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" iface="eth0" netns="" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.236 [INFO][5030] k8s.go 615: Releasing IP address(es) ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.236 [INFO][5030] utils.go 188: Calico CNI releasing IP address ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.256 [INFO][5040] ipam_plugin.go 417: Releasing address using handleID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.256 [INFO][5040] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.256 [INFO][5040] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.261 [WARNING][5040] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.261 [INFO][5040] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" HandleID="k8s-pod-network.aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Workload="localhost-k8s-coredns--76f75df574--4rt2z-eth0" Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.264 [INFO][5040] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.269291 containerd[1602]: 2024-10-09 07:23:45.266 [INFO][5030] k8s.go 621: Teardown processing complete. ContainerID="aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a" Oct 9 07:23:45.269819 containerd[1602]: time="2024-10-09T07:23:45.269343895Z" level=info msg="TearDown network for sandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" successfully" Oct 9 07:23:45.273588 containerd[1602]: time="2024-10-09T07:23:45.273554315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:23:45.273656 containerd[1602]: time="2024-10-09T07:23:45.273613367Z" level=info msg="RemovePodSandbox \"aa24baddaf2c40503bdbefb93741666b4b10549f64c9b95e6b0d5a7467726e0a\" returns successfully" Oct 9 07:23:45.274093 containerd[1602]: time="2024-10-09T07:23:45.274065384Z" level=info msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\"" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.311 [WARNING][5067] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0", GenerateName:"calico-kube-controllers-7cb586c8b9-", Namespace:"calico-system", SelfLink:"", UID:"7cb1bce7-bd6f-458f-8664-2b72f8e26245", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb586c8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f", Pod:"calico-kube-controllers-7cb586c8b9-kflrc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc60215b4fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.312 [INFO][5067] k8s.go 608: Cleaning up netns ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.312 [INFO][5067] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" iface="eth0" netns="" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.312 [INFO][5067] k8s.go 615: Releasing IP address(es) ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.312 [INFO][5067] utils.go 188: Calico CNI releasing IP address ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.338 [INFO][5078] ipam_plugin.go 417: Releasing address using handleID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.338 [INFO][5078] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.338 [INFO][5078] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.345 [WARNING][5078] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.345 [INFO][5078] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.346 [INFO][5078] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.359910 containerd[1602]: 2024-10-09 07:23:45.352 [INFO][5067] k8s.go 621: Teardown processing complete. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.360332 containerd[1602]: time="2024-10-09T07:23:45.359939134Z" level=info msg="TearDown network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" successfully" Oct 9 07:23:45.360332 containerd[1602]: time="2024-10-09T07:23:45.359968670Z" level=info msg="StopPodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" returns successfully" Oct 9 07:23:45.360393 containerd[1602]: time="2024-10-09T07:23:45.360367708Z" level=info msg="RemovePodSandbox for \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\"" Oct 9 07:23:45.360423 containerd[1602]: time="2024-10-09T07:23:45.360407873Z" level=info msg="Forcibly stopping sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\"" Oct 9 07:23:45.373108 sshd[4982]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:45.378983 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:56628.service - OpenSSH per-connection server daemon (10.0.0.1:56628). 
Oct 9 07:23:45.379506 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:56620.service: Deactivated successfully. Oct 9 07:23:45.383652 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:23:45.383950 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:23:45.385354 systemd-logind[1575]: Removed session 16. Oct 9 07:23:45.417970 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:45.419805 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:45.424954 systemd-logind[1575]: New session 17 of user core. Oct 9 07:23:45.431882 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.409 [WARNING][5103] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0", GenerateName:"calico-kube-controllers-7cb586c8b9-", Namespace:"calico-system", SelfLink:"", UID:"7cb1bce7-bd6f-458f-8664-2b72f8e26245", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb586c8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0df6a74c416ef87e9bf56654c948b15dc004e0600596a4798f6733690b67603f", Pod:"calico-kube-controllers-7cb586c8b9-kflrc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc60215b4fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.409 [INFO][5103] k8s.go 608: Cleaning up netns ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.409 [INFO][5103] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" iface="eth0" netns="" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.410 [INFO][5103] k8s.go 615: Releasing IP address(es) ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.410 [INFO][5103] utils.go 188: Calico CNI releasing IP address ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.434 [INFO][5116] ipam_plugin.go 417: Releasing address using handleID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.434 [INFO][5116] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.434 [INFO][5116] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.441 [WARNING][5116] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.441 [INFO][5116] ipam_plugin.go 445: Releasing address using workloadID ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" HandleID="k8s-pod-network.85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Workload="localhost-k8s-calico--kube--controllers--7cb586c8b9--kflrc-eth0" Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.442 [INFO][5116] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.447716 containerd[1602]: 2024-10-09 07:23:45.444 [INFO][5103] k8s.go 621: Teardown processing complete. ContainerID="85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e" Oct 9 07:23:45.448132 containerd[1602]: time="2024-10-09T07:23:45.447769554Z" level=info msg="TearDown network for sandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" successfully" Oct 9 07:23:45.452107 containerd[1602]: time="2024-10-09T07:23:45.452022374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:23:45.452189 containerd[1602]: time="2024-10-09T07:23:45.452141888Z" level=info msg="RemovePodSandbox \"85f0050f0319861620cb281b1ab72ddf1a37476741e04d88fb159e97124d233e\" returns successfully" Oct 9 07:23:45.452735 containerd[1602]: time="2024-10-09T07:23:45.452697311Z" level=info msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\"" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.487 [WARNING][5142] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--82p6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65", Pod:"coredns-76f75df574-82p6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califee2f49e6d9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.487 [INFO][5142] k8s.go 608: Cleaning up netns ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.487 [INFO][5142] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" iface="eth0" netns="" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.487 [INFO][5142] k8s.go 615: Releasing IP address(es) ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.487 [INFO][5142] utils.go 188: Calico CNI releasing IP address ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.508 [INFO][5150] ipam_plugin.go 417: Releasing address using handleID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.508 [INFO][5150] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.509 [INFO][5150] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.514 [WARNING][5150] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.514 [INFO][5150] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.516 [INFO][5150] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.522206 containerd[1602]: 2024-10-09 07:23:45.519 [INFO][5142] k8s.go 621: Teardown processing complete. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.522962 containerd[1602]: time="2024-10-09T07:23:45.522260245Z" level=info msg="TearDown network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" successfully" Oct 9 07:23:45.522962 containerd[1602]: time="2024-10-09T07:23:45.522287997Z" level=info msg="StopPodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" returns successfully" Oct 9 07:23:45.522962 containerd[1602]: time="2024-10-09T07:23:45.522885979Z" level=info msg="RemovePodSandbox for \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\"" Oct 9 07:23:45.522962 containerd[1602]: time="2024-10-09T07:23:45.522922638Z" level=info msg="Forcibly stopping sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\"" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.559 [WARNING][5176] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--82p6t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1ca0ccc4-e5cb-48c2-89bf-6eb0e0c75523", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"678d3dcb3cb64296e48310f6bd62409ad687a2e20c345a9bf3cd4c0bb2ba0c65", Pod:"coredns-76f75df574-82p6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califee2f49e6d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.559 [INFO][5176] k8s.go 
608: Cleaning up netns ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.559 [INFO][5176] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" iface="eth0" netns="" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.559 [INFO][5176] k8s.go 615: Releasing IP address(es) ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.559 [INFO][5176] utils.go 188: Calico CNI releasing IP address ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.583 [INFO][5185] ipam_plugin.go 417: Releasing address using handleID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.584 [INFO][5185] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.584 [INFO][5185] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.588 [WARNING][5185] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.588 [INFO][5185] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" HandleID="k8s-pod-network.3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Workload="localhost-k8s-coredns--76f75df574--82p6t-eth0" Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.590 [INFO][5185] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:23:45.596276 containerd[1602]: 2024-10-09 07:23:45.593 [INFO][5176] k8s.go 621: Teardown processing complete. ContainerID="3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45" Oct 9 07:23:45.596997 containerd[1602]: time="2024-10-09T07:23:45.596326640Z" level=info msg="TearDown network for sandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" successfully" Oct 9 07:23:45.603704 containerd[1602]: time="2024-10-09T07:23:45.603643829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:23:45.603704 containerd[1602]: time="2024-10-09T07:23:45.603702198Z" level=info msg="RemovePodSandbox \"3591e1ad46bb548337290bdf4f1cedac99ad003a9a1214c34fd9ae567c6ada45\" returns successfully" Oct 9 07:23:45.643125 sshd[5104]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:45.652790 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:56632.service - OpenSSH per-connection server daemon (10.0.0.1:56632). Oct 9 07:23:45.653341 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:56628.service: Deactivated successfully. 
Oct 9 07:23:45.656202 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:23:45.658221 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:23:45.659618 systemd-logind[1575]: Removed session 17. Oct 9 07:23:45.682099 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:45.683866 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:45.688195 systemd-logind[1575]: New session 18 of user core. Oct 9 07:23:45.701898 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:23:47.192178 sshd[5192]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:47.205151 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:56636.service - OpenSSH per-connection server daemon (10.0.0.1:56636). Oct 9 07:23:47.208685 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:56632.service: Deactivated successfully. Oct 9 07:23:47.215787 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:23:47.218087 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:23:47.220271 systemd-logind[1575]: Removed session 18. Oct 9 07:23:47.245479 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 56636 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:47.247317 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:47.251620 systemd-logind[1575]: New session 19 of user core. Oct 9 07:23:47.260761 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:23:47.473215 sshd[5233]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:47.481882 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:56642.service - OpenSSH per-connection server daemon (10.0.0.1:56642). Oct 9 07:23:47.482946 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:56636.service: Deactivated successfully. 
Oct 9 07:23:47.487705 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:23:47.488639 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:23:47.489874 systemd-logind[1575]: Removed session 19. Oct 9 07:23:47.509866 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 56642 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:47.511413 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:47.515353 systemd-logind[1575]: New session 20 of user core. Oct 9 07:23:47.523746 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:23:47.653102 sshd[5247]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:47.657549 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:56642.service: Deactivated successfully. Oct 9 07:23:47.659889 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:23:47.659955 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:23:47.660973 systemd-logind[1575]: Removed session 20. Oct 9 07:23:52.667844 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362). Oct 9 07:23:52.697132 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:52.698969 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:52.703325 systemd-logind[1575]: New session 21 of user core. Oct 9 07:23:52.712797 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:23:52.831000 sshd[5265]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:52.835882 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:54362.service: Deactivated successfully. Oct 9 07:23:52.838446 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. 
Oct 9 07:23:52.838600 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:23:52.839865 systemd-logind[1575]: Removed session 21. Oct 9 07:23:55.158756 kubelet[2739]: E1009 07:23:55.158716 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:23:57.841683 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:54372.service - OpenSSH per-connection server daemon (10.0.0.1:54372). Oct 9 07:23:57.869612 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 54372 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:57.871147 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:57.875334 systemd-logind[1575]: New session 22 of user core. Oct 9 07:23:57.884736 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:23:57.993535 sshd[5313]: pam_unix(sshd:session): session closed for user core Oct 9 07:23:57.997897 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:54372.service: Deactivated successfully. Oct 9 07:23:58.000577 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:23:58.000661 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:23:58.001623 systemd-logind[1575]: Removed session 22. 
Oct 9 07:24:01.643422 kubelet[2739]: I1009 07:24:01.643351 2739 topology_manager.go:215] "Topology Admit Handler" podUID="b1b2b0f9-fd9a-4e75-9af1-1a769c236665" podNamespace="calico-apiserver" podName="calico-apiserver-686b99ff9c-d76hn" Oct 9 07:24:01.741060 kubelet[2739]: I1009 07:24:01.740989 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b1b2b0f9-fd9a-4e75-9af1-1a769c236665-calico-apiserver-certs\") pod \"calico-apiserver-686b99ff9c-d76hn\" (UID: \"b1b2b0f9-fd9a-4e75-9af1-1a769c236665\") " pod="calico-apiserver/calico-apiserver-686b99ff9c-d76hn" Oct 9 07:24:01.741210 kubelet[2739]: I1009 07:24:01.741136 2739 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjst\" (UniqueName: \"kubernetes.io/projected/b1b2b0f9-fd9a-4e75-9af1-1a769c236665-kube-api-access-wcjst\") pod \"calico-apiserver-686b99ff9c-d76hn\" (UID: \"b1b2b0f9-fd9a-4e75-9af1-1a769c236665\") " pod="calico-apiserver/calico-apiserver-686b99ff9c-d76hn" Oct 9 07:24:01.955363 containerd[1602]: time="2024-10-09T07:24:01.955228505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686b99ff9c-d76hn,Uid:b1b2b0f9-fd9a-4e75-9af1-1a769c236665,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:24:02.132342 systemd-networkd[1251]: cali5fef8f01085: Link UP Oct 9 07:24:02.132589 systemd-networkd[1251]: cali5fef8f01085: Gained carrier Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.011 [INFO][5338] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0 calico-apiserver-686b99ff9c- calico-apiserver b1b2b0f9-fd9a-4e75-9af1-1a769c236665 1097 0 2024-10-09 07:24:01 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:686b99ff9c 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-686b99ff9c-d76hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5fef8f01085 [] []}} ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.011 [INFO][5338] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.043 [INFO][5350] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" HandleID="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Workload="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.053 [INFO][5350] ipam_plugin.go 270: Auto assigning IP ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" HandleID="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Workload="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001180b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-686b99ff9c-d76hn", "timestamp":"2024-10-09 07:24:02.043900021 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.053 [INFO][5350] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.053 [INFO][5350] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.053 [INFO][5350] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.055 [INFO][5350] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.059 [INFO][5350] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.064 [INFO][5350] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.065 [INFO][5350] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.068 [INFO][5350] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.068 [INFO][5350] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.069 [INFO][5350] ipam.go 1685: Creating new handle: k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.119 [INFO][5350] ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.126 [INFO][5350] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.126 [INFO][5350] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" host="localhost" Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.126 [INFO][5350] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:24:02.146485 containerd[1602]: 2024-10-09 07:24:02.126 [INFO][5350] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" HandleID="k8s-pod-network.d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Workload="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.129 [INFO][5338] k8s.go 386: Populated endpoint ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0", GenerateName:"calico-apiserver-686b99ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1b2b0f9-fd9a-4e75-9af1-1a769c236665", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 1, 0, time.Local), DeletionTimestamp:<nil>, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686b99ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-686b99ff9c-d76hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fef8f01085", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.130 [INFO][5338] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.130 [INFO][5338] dataplane_linux.go 68: Setting the host side veth name to cali5fef8f01085 ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.132 [INFO][5338] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.133 [INFO][5338] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0", GenerateName:"calico-apiserver-686b99ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1b2b0f9-fd9a-4e75-9af1-1a769c236665", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686b99ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab", Pod:"calico-apiserver-686b99ff9c-d76hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fef8f01085", MAC:"1a:e0:b3:04:87:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 
Oct 9 07:24:02.147917 containerd[1602]: 2024-10-09 07:24:02.140 [INFO][5338] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab" Namespace="calico-apiserver" Pod="calico-apiserver-686b99ff9c-d76hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--686b99ff9c--d76hn-eth0"
Oct 9 07:24:02.173964 containerd[1602]: time="2024-10-09T07:24:02.173817831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:24:02.173964 containerd[1602]: time="2024-10-09T07:24:02.173887434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:24:02.173964 containerd[1602]: time="2024-10-09T07:24:02.173903405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:24:02.173964 containerd[1602]: time="2024-10-09T07:24:02.173913595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:24:02.204792 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 07:24:02.238148 containerd[1602]: time="2024-10-09T07:24:02.238013877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686b99ff9c-d76hn,Uid:b1b2b0f9-fd9a-4e75-9af1-1a769c236665,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab\""
Oct 9 07:24:02.239752 containerd[1602]: time="2024-10-09T07:24:02.239682488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:24:02.864152 kubelet[2739]: E1009 07:24:02.864070 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:24:03.014919 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:43952.service - OpenSSH per-connection server daemon (10.0.0.1:43952).
Oct 9 07:24:03.047494 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 43952 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:24:03.049619 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:24:03.053888 systemd-logind[1575]: New session 23 of user core.
Oct 9 07:24:03.064884 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:24:03.190925 sshd[5414]: pam_unix(sshd:session): session closed for user core
Oct 9 07:24:03.195360 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:43952.service: Deactivated successfully.
Oct 9 07:24:03.198295 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:24:03.198328 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:24:03.199941 systemd-logind[1575]: Removed session 23.
Oct 9 07:24:03.856780 systemd-networkd[1251]: cali5fef8f01085: Gained IPv6LL
Oct 9 07:24:04.663944 containerd[1602]: time="2024-10-09T07:24:04.663873336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:04.701327 containerd[1602]: time="2024-10-09T07:24:04.701221183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:24:04.702917 containerd[1602]: time="2024-10-09T07:24:04.702846217Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:04.720097 containerd[1602]: time="2024-10-09T07:24:04.720004686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:04.720948 containerd[1602]: time="2024-10-09T07:24:04.720911684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.481193748s"
Oct 9 07:24:04.720948 containerd[1602]: time="2024-10-09T07:24:04.720952242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:24:04.723104 containerd[1602]: time="2024-10-09T07:24:04.723031917Z" level=info msg="CreateContainer within sandbox \"d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:24:04.736029 containerd[1602]: time="2024-10-09T07:24:04.735967223Z" level=info msg="CreateContainer within sandbox \"d58dbc7354f99a433de5a958f4136b0201db3f046eff4536cdf419ff76a67cab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf83aa1769b425c56ea2fb2773b33092ad6f72b34e34980450e276f2d7680af4\""
Oct 9 07:24:04.738221 containerd[1602]: time="2024-10-09T07:24:04.736947943Z" level=info msg="StartContainer for \"bf83aa1769b425c56ea2fb2773b33092ad6f72b34e34980450e276f2d7680af4\""
Oct 9 07:24:05.274564 containerd[1602]: time="2024-10-09T07:24:05.274448529Z" level=info msg="StartContainer for \"bf83aa1769b425c56ea2fb2773b33092ad6f72b34e34980450e276f2d7680af4\" returns successfully"
Oct 9 07:24:06.293502 kubelet[2739]: I1009 07:24:06.293142 2739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-686b99ff9c-d76hn" podStartSLOduration=2.811181305 podStartE2EDuration="5.293087458s" podCreationTimestamp="2024-10-09 07:24:01 +0000 UTC" firstStartedPulling="2024-10-09 07:24:02.239386211 +0000 UTC m=+77.472127252" lastFinishedPulling="2024-10-09 07:24:04.721292364 +0000 UTC m=+79.954033405" observedRunningTime="2024-10-09 07:24:06.292807622 +0000 UTC m=+81.525548664" watchObservedRunningTime="2024-10-09 07:24:06.293087458 +0000 UTC m=+81.525828499"
Oct 9 07:24:08.202743 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:43962.service - OpenSSH per-connection server daemon (10.0.0.1:43962).
Oct 9 07:24:08.237182 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 43962 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:24:08.238965 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:24:08.243329 systemd-logind[1575]: New session 24 of user core.
Oct 9 07:24:08.251724 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:24:08.378515 sshd[5493]: pam_unix(sshd:session): session closed for user core
Oct 9 07:24:08.383097 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:43962.service: Deactivated successfully.
Oct 9 07:24:08.385951 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:24:08.386077 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:24:08.387244 systemd-logind[1575]: Removed session 24.