Oct 9 01:01:13.876997 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 01:01:13.877017 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:01:13.877028 kernel: BIOS-provided physical RAM map:
Oct 9 01:01:13.877034 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 9 01:01:13.877040 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 9 01:01:13.877046 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 9 01:01:13.877053 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 9 01:01:13.877060 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 9 01:01:13.877066 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 9 01:01:13.877072 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 9 01:01:13.877080 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 9 01:01:13.877087 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Oct 9 01:01:13.877093 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 9 01:01:13.877099 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 9 01:01:13.877107 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 9 01:01:13.877113 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 9 01:01:13.877122 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 9 01:01:13.877129 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 01:01:13.877135 kernel: BIOS-e820: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved
Oct 9 01:01:13.877142 kernel: NX (Execute Disable) protection: active
Oct 9 01:01:13.877149 kernel: APIC: Static calls initialized
Oct 9 01:01:13.877155 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable
Oct 9 01:01:13.877162 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable
Oct 9 01:01:13.877169 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable
Oct 9 01:01:13.877175 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable
Oct 9 01:01:13.877182 kernel: extended physical RAM map:
Oct 9 01:01:13.877188 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 9 01:01:13.877197 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 9 01:01:13.877204 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 9 01:01:13.877210 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 9 01:01:13.877217 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 9 01:01:13.877224 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 9 01:01:13.877230 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 9 01:01:13.877237 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b62e017] usable
Oct 9 01:01:13.877244 kernel: reserve setup_data: [mem 0x000000009b62e018-0x000000009b66ae57] usable
Oct 9 01:01:13.877250 kernel: reserve setup_data: [mem 0x000000009b66ae58-0x000000009b66b017] usable
Oct 9 01:01:13.877257 kernel: reserve setup_data: [mem 0x000000009b66b018-0x000000009b674c57] usable
Oct 9 01:01:13.877263 kernel: reserve setup_data: [mem 0x000000009b674c58-0x000000009c8eefff] usable
Oct 9 01:01:13.877272 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Oct 9 01:01:13.877279 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 9 01:01:13.877289 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 9 01:01:13.877296 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 9 01:01:13.877303 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 9 01:01:13.877310 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 9 01:01:13.877331 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 01:01:13.877338 kernel: reserve setup_data: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved
Oct 9 01:01:13.877345 kernel: efi: EFI v2.7 by EDK II
Oct 9 01:01:13.877352 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b6b3018 RNG=0x9cb73018
Oct 9 01:01:13.877359 kernel: random: crng init done
Oct 9 01:01:13.877366 kernel: efi: Remove mem127: MMIO range=[0xffe00000-0xffffffff] (2MB) from e820 map
Oct 9 01:01:13.877373 kernel: e820: remove [mem 0xffe00000-0xffffffff] reserved
Oct 9 01:01:13.877380 kernel: secureboot: Secure boot disabled
Oct 9 01:01:13.877387 kernel: SMBIOS 2.8 present.
Oct 9 01:01:13.877394 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 9 01:01:13.877403 kernel: Hypervisor detected: KVM
Oct 9 01:01:13.877410 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 01:01:13.877417 kernel: kvm-clock: using sched offset of 4455650450 cycles
Oct 9 01:01:13.877424 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 01:01:13.877432 kernel: tsc: Detected 2794.750 MHz processor
Oct 9 01:01:13.877439 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 01:01:13.877447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 01:01:13.877454 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 9 01:01:13.877461 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 9 01:01:13.877468 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 01:01:13.877477 kernel: Using GB pages for direct mapping
Oct 9 01:01:13.877484 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:01:13.877491 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 9 01:01:13.877498 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 01:01:13.877505 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877512 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877519 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 9 01:01:13.877527 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877534 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877543 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877550 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:01:13.877557 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 9 01:01:13.877564 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 9 01:01:13.877572 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Oct 9 01:01:13.877588 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 9 01:01:13.877596 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 9 01:01:13.877603 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 9 01:01:13.877612 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 9 01:01:13.877619 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 9 01:01:13.877626 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 9 01:01:13.877633 kernel: No NUMA configuration found
Oct 9 01:01:13.877641 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 9 01:01:13.877648 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 9 01:01:13.877655 kernel: Zone ranges:
Oct 9 01:01:13.877662 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 01:01:13.877669 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 9 01:01:13.877676 kernel:   Normal   empty
Oct 9 01:01:13.877686 kernel: Movable zone start for each node
Oct 9 01:01:13.877693 kernel: Early memory node ranges
Oct 9 01:01:13.877700 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 9 01:01:13.877707 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 9 01:01:13.877714 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Oct 9 01:01:13.877721 kernel:   node   0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 9 01:01:13.877728 kernel:   node   0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 9 01:01:13.877735 kernel:   node   0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 9 01:01:13.877742 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 9 01:01:13.877752 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:01:13.877759 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 9 01:01:13.877766 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 9 01:01:13.877773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:01:13.877781 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 9 01:01:13.877788 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 9 01:01:13.877795 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 9 01:01:13.877803 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 01:01:13.877816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 01:01:13.877832 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 01:01:13.877841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 01:01:13.877850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 01:01:13.877858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 01:01:13.877868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 01:01:13.877877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 01:01:13.877886 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 01:01:13.877895 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 01:01:13.877905 kernel: TSC deadline timer available
Oct 9 01:01:13.877930 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 9 01:01:13.877941 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 01:01:13.877949 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 9 01:01:13.877958 kernel: kvm-guest: setup PV sched yield
Oct 9 01:01:13.877966 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 9 01:01:13.877973 kernel: Booting paravirtualized kernel on KVM
Oct 9 01:01:13.877981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 01:01:13.877988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 9 01:01:13.877996 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 9 01:01:13.878003 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 9 01:01:13.878013 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 9 01:01:13.878020 kernel: kvm-guest: PV spinlocks enabled
Oct 9 01:01:13.878027 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 9 01:01:13.878036 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:01:13.878044 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:01:13.878052 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 01:01:13.878059 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:01:13.878069 kernel: Fallback order for Node 0: 0
Oct 9 01:01:13.878077 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 9 01:01:13.878084 kernel: Policy zone: DMA32
Oct 9 01:01:13.878092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:01:13.878100 kernel: Memory: 2395860K/2567000K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 170884K reserved, 0K cma-reserved)
Oct 9 01:01:13.878107 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 01:01:13.878115 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 01:01:13.878122 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 01:01:13.878132 kernel: Dynamic Preempt: voluntary
Oct 9 01:01:13.878139 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:01:13.878147 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:01:13.878155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 01:01:13.878163 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:01:13.878170 kernel: Rude variant of Tasks RCU enabled.
Oct 9 01:01:13.878178 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:01:13.878185 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:01:13.878193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 01:01:13.878200 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 9 01:01:13.878210 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:01:13.878218 kernel: Console: colour dummy device 80x25
Oct 9 01:01:13.878225 kernel: printk: console [ttyS0] enabled
Oct 9 01:01:13.878233 kernel: ACPI: Core revision 20230628
Oct 9 01:01:13.878240 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 01:01:13.878248 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 01:01:13.878256 kernel: x2apic enabled
Oct 9 01:01:13.878263 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 01:01:13.878271 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 9 01:01:13.878281 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 9 01:01:13.878288 kernel: kvm-guest: setup PV IPIs
Oct 9 01:01:13.878295 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 01:01:13.878303 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 01:01:13.878310 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 9 01:01:13.878357 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 01:01:13.878365 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 01:01:13.878372 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 01:01:13.878380 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 01:01:13.878390 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 01:01:13.878398 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 01:01:13.878405 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 01:01:13.878413 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 01:01:13.878420 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 01:01:13.878428 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 01:01:13.878435 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 01:01:13.878443 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 01:01:13.878453 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 01:01:13.878461 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 01:01:13.878468 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 01:01:13.878476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 01:01:13.878483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 01:01:13.878491 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 01:01:13.878498 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 01:01:13.878506 kernel: Freeing SMP alternatives memory: 32K
Oct 9 01:01:13.878514 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:01:13.878523 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:01:13.878531 kernel: landlock: Up and running.
Oct 9 01:01:13.878538 kernel: SELinux: Initializing.
Oct 9 01:01:13.878546 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:01:13.878553 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:01:13.878561 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 01:01:13.878569 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:01:13.878576 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:01:13.878594 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:01:13.878604 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 01:01:13.878611 kernel: ... version:                0
Oct 9 01:01:13.878619 kernel: ... bit width:              48
Oct 9 01:01:13.878626 kernel: ... generic registers:      6
Oct 9 01:01:13.878634 kernel: ... value mask:             0000ffffffffffff
Oct 9 01:01:13.878641 kernel: ... max period:             00007fffffffffff
Oct 9 01:01:13.878649 kernel: ... fixed-purpose events:   0
Oct 9 01:01:13.878656 kernel: ... event mask:             000000000000003f
Oct 9 01:01:13.878664 kernel: signal: max sigframe size: 1776
Oct 9 01:01:13.878673 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:01:13.878681 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:01:13.878689 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:01:13.878696 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 01:01:13.878704 kernel: .... node #0, CPUs: #1 #2 #3
Oct 9 01:01:13.878711 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 01:01:13.878718 kernel: smpboot: Max logical packages: 1
Oct 9 01:01:13.878726 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 9 01:01:13.878733 kernel: devtmpfs: initialized
Oct 9 01:01:13.878743 kernel: x86/mm: Memory block size: 128MB
Oct 9 01:01:13.878751 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 9 01:01:13.878758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 9 01:01:13.878766 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 9 01:01:13.878774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 9 01:01:13.878781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 9 01:01:13.878789 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:01:13.878796 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 01:01:13.878804 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:01:13.878813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:01:13.878830 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:01:13.878838 kernel: audit: type=2000 audit(1728435674.306:1): state=initialized audit_enabled=0 res=1
Oct 9 01:01:13.878845 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:01:13.878859 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 01:01:13.878873 kernel: cpuidle: using governor menu
Oct 9 01:01:13.878888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:01:13.878902 kernel: dca service started, version 1.12.1
Oct 9 01:01:13.878910 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 01:01:13.878938 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 9 01:01:13.878956 kernel: PCI: Using configuration type 1 for base access
Oct 9 01:01:13.878980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 01:01:13.878992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:01:13.879002 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:01:13.879012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:01:13.879019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:01:13.879027 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:01:13.879034 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:01:13.879045 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:01:13.879052 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:01:13.879060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:01:13.879067 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 01:01:13.879075 kernel: ACPI: Interpreter enabled
Oct 9 01:01:13.879082 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 01:01:13.879090 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 01:01:13.879097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 01:01:13.879105 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 01:01:13.879114 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 01:01:13.879122 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:01:13.879302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:01:13.879450 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 01:01:13.879571 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 01:01:13.879591 kernel: PCI host bridge to bus 0000:00
Oct 9 01:01:13.879716 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 01:01:13.879833 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 01:01:13.879945 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 01:01:13.880055 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 9 01:01:13.880238 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 01:01:13.880390 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 9 01:01:13.880532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:01:13.880683 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 01:01:13.880826 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 9 01:01:13.880955 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 9 01:01:13.881075 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 9 01:01:13.881194 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 9 01:01:13.881314 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 9 01:01:13.881457 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 01:01:13.881604 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 01:01:13.881771 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 9 01:01:13.881893 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 9 01:01:13.882013 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 9 01:01:13.882142 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 9 01:01:13.882264 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 9 01:01:13.882412 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 9 01:01:13.882559 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 9 01:01:13.882731 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 01:01:13.882853 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 9 01:01:13.882973 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 9 01:01:13.883093 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 9 01:01:13.883212 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 9 01:01:13.883379 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 01:01:13.883506 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 01:01:13.883642 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 01:01:13.883763 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 9 01:01:13.883880 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 9 01:01:13.884005 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 01:01:13.884122 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 9 01:01:13.884136 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 01:01:13.884144 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 01:01:13.884152 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 01:01:13.884160 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 01:01:13.884167 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 01:01:13.884174 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 01:01:13.884182 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 01:01:13.884190 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 01:01:13.884197 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 01:01:13.884207 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 01:01:13.884215 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 01:01:13.884222 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 01:01:13.884229 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 01:01:13.884237 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 01:01:13.884244 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 01:01:13.884252 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 01:01:13.884259 kernel: iommu: Default domain type: Translated
Oct 9 01:01:13.884267 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 01:01:13.884276 kernel: efivars: Registered efivars operations
Oct 9 01:01:13.884284 kernel: PCI: Using ACPI for IRQ routing
Oct 9 01:01:13.884291 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 01:01:13.884299 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 9 01:01:13.884306 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 9 01:01:13.884313 kernel: e820: reserve RAM buffer [mem 0x9b62e018-0x9bffffff]
Oct 9 01:01:13.884332 kernel: e820: reserve RAM buffer [mem 0x9b66b018-0x9bffffff]
Oct 9 01:01:13.884340 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 9 01:01:13.884347 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 9 01:01:13.884471 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 01:01:13.884599 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 01:01:13.884721 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 01:01:13.884731 kernel: vgaarb: loaded
Oct 9 01:01:13.884739 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 01:01:13.884746 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 01:01:13.884754 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 01:01:13.884761 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:01:13.884769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:01:13.884780 kernel: pnp: PnP ACPI init
Oct 9 01:01:13.884917 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 01:01:13.884929 kernel: pnp: PnP ACPI: found 6 devices
Oct 9 01:01:13.884937 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 01:01:13.884944 kernel: NET: Registered PF_INET protocol family
Oct 9 01:01:13.884952 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 01:01:13.884959 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 01:01:13.884967 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:01:13.884978 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:01:13.884985 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 01:01:13.884993 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 01:01:13.885001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:01:13.885008 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:01:13.885016 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:01:13.885023 kernel: NET: Registered PF_XDP protocol family
Oct 9 01:01:13.885145 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 9 01:01:13.885270 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 9 01:01:13.885397 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 01:01:13.885508 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 01:01:13.885635 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 01:01:13.885746 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 9 01:01:13.885858 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 01:01:13.885967 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 9 01:01:13.885977 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:01:13.885989 kernel: Initialise system trusted keyrings
Oct 9 01:01:13.886013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 01:01:13.886024 kernel: Key type asymmetric registered
Oct 9 01:01:13.886031 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:01:13.886039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 01:01:13.886047 kernel: io scheduler mq-deadline registered
Oct 9 01:01:13.886055 kernel: io scheduler kyber registered
Oct 9 01:01:13.886063 kernel: io scheduler bfq registered
Oct 9 01:01:13.886071 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 01:01:13.886081 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 01:01:13.886089 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 01:01:13.886097 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 9 01:01:13.886105 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:01:13.886112 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 01:01:13.886120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 01:01:13.886128 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 01:01:13.886136 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 01:01:13.886263 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 01:01:13.886277 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 01:01:13.886460 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 01:01:13.886575 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T01:01:13 UTC (1728435673)
Oct 9 01:01:13.886702 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 9 01:01:13.886712 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 01:01:13.886720 kernel: efifb: probing for efifb
Oct 9 01:01:13.886728 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 9 01:01:13.886736 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 9 01:01:13.886748 kernel: efifb: scrolling: redraw
Oct 9 01:01:13.886755 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 9 01:01:13.886763 kernel: Console: switching to colour frame buffer device 160x50
Oct 9 01:01:13.886771 kernel: fb0: EFI VGA frame buffer device
Oct 9 01:01:13.886781 kernel: pstore: Using crash dump compression: deflate
Oct 9 01:01:13.886789 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 9 01:01:13.886799 kernel: NET: Registered PF_INET6 protocol family
Oct 9 01:01:13.886807 kernel: Segment Routing with IPv6
Oct 9 01:01:13.886815 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 01:01:13.886823 kernel: NET: Registered PF_PACKET protocol family
Oct 9 01:01:13.886831 kernel: Key type dns_resolver registered
Oct 9 01:01:13.886838 kernel: IPI shorthand broadcast: enabled
Oct 9 01:01:13.886846 kernel: sched_clock: Marking stable (574002680, 136907847)->(755935699, -45025172)
Oct 9 01:01:13.886854 kernel: registered taskstats version 1
Oct 9 01:01:13.886861 kernel: Loading compiled-in X.509 certificates
Oct 9 01:01:13.886872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6'
Oct 9 01:01:13.886880 kernel: Key type .fscrypt registered
Oct 9 01:01:13.886888 kernel: Key type fscrypt-provisioning registered
Oct 9 01:01:13.886896 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 01:01:13.886903 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:01:13.886914 kernel: ima: No architecture policies found
Oct 9 01:01:13.886921 kernel: clk: Disabling unused clocks
Oct 9 01:01:13.886929 kernel: Freeing unused kernel image (initmem) memory: 42872K
Oct 9 01:01:13.886937 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 01:01:13.886947 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Oct 9 01:01:13.886955 kernel: Run /init as init process
Oct 9 01:01:13.886963 kernel:   with arguments:
Oct 9 01:01:13.886971 kernel:     /init
Oct 9 01:01:13.886979 kernel:   with environment:
Oct 9 01:01:13.886986 kernel:     HOME=/
Oct 9 01:01:13.886994 kernel:     TERM=linux
Oct 9 01:01:13.887002 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:01:13.887012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:01:13.887025 systemd[1]: Detected virtualization kvm.
Oct 9 01:01:13.887033 systemd[1]: Detected architecture x86-64.
Oct 9 01:01:13.887041 systemd[1]: Running in initrd.
Oct 9 01:01:13.887049 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:01:13.887057 systemd[1]: Hostname set to .
Oct 9 01:01:13.887066 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:01:13.887074 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:01:13.887085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:01:13.887093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:01:13.887102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:01:13.887111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:01:13.887119 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:01:13.887128 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 01:01:13.887138 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:01:13.887149 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 01:01:13.887157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:01:13.887166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:01:13.887174 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:01:13.887182 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:01:13.887191 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:01:13.887199 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:01:13.887207 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:01:13.887218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:01:13.887227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:01:13.887235 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:01:13.887243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:01:13.887252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:01:13.887260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:01:13.887268 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:01:13.887277 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 9 01:01:13.887287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:01:13.887295 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:01:13.887304 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:01:13.887312 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:01:13.887334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:01:13.887342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:13.887351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:01:13.887359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:01:13.887367 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:01:13.887396 systemd-journald[193]: Collecting audit messages is disabled. Oct 9 01:01:13.887418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:01:13.887426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:13.887435 systemd-journald[193]: Journal started Oct 9 01:01:13.887453 systemd-journald[193]: Runtime Journal (/run/log/journal/81491c02292b4f82b9a9fb76057e4be8) is 6.0M, max 48.3M, 42.2M free. Oct 9 01:01:13.880519 systemd-modules-load[194]: Inserted module 'overlay' Oct 9 01:01:13.897386 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:13.897410 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:01:13.898282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:01:13.902555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:01:13.907435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 9 01:01:13.910833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:01:13.913454 kernel: Bridge firewalling registered Oct 9 01:01:13.912610 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 9 01:01:13.913781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:01:13.916691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:01:13.919390 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:01:13.920450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:01:13.923219 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 01:01:13.931654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:01:13.933154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:01:13.938916 dracut-cmdline[223]: dracut-dracut-053 Oct 9 01:01:13.944259 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:01:13.942442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:01:13.977614 systemd-resolved[234]: Positive Trust Anchors: Oct 9 01:01:13.977629 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:01:13.977668 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:01:13.980778 systemd-resolved[234]: Defaulting to hostname 'linux'. Oct 9 01:01:13.981985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:01:13.988701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:01:14.045358 kernel: SCSI subsystem initialized Oct 9 01:01:14.057347 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:01:14.070347 kernel: iscsi: registered transport (tcp) Oct 9 01:01:14.091343 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:01:14.091367 kernel: QLogic iSCSI HBA Driver Oct 9 01:01:14.136428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:01:14.148438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 01:01:14.171747 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 01:01:14.171778 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:01:14.172773 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:01:14.212340 kernel: raid6: avx2x4 gen() 30395 MB/s Oct 9 01:01:14.229334 kernel: raid6: avx2x2 gen() 31223 MB/s Oct 9 01:01:14.246446 kernel: raid6: avx2x1 gen() 25820 MB/s Oct 9 01:01:14.246463 kernel: raid6: using algorithm avx2x2 gen() 31223 MB/s Oct 9 01:01:14.264417 kernel: raid6: .... xor() 19995 MB/s, rmw enabled Oct 9 01:01:14.264431 kernel: raid6: using avx2x2 recovery algorithm Oct 9 01:01:14.284338 kernel: xor: automatically using best checksumming function avx Oct 9 01:01:14.435351 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:01:14.448406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:01:14.461593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:01:14.473666 systemd-udevd[416]: Using default interface naming scheme 'v255'. Oct 9 01:01:14.478225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:01:14.490495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 01:01:14.504109 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Oct 9 01:01:14.537708 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:01:14.552445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:01:14.615451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:01:14.624510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:01:14.640120 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:01:14.644696 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 01:01:14.652013 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 9 01:01:14.654007 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 01:01:14.654181 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 01:01:14.648669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:01:14.659756 kernel: GPT:9289727 != 19775487 Oct 9 01:01:14.659781 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:01:14.659791 kernel: GPT:9289727 != 19775487 Oct 9 01:01:14.659801 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 01:01:14.659811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:01:14.652354 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:01:14.662334 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 01:01:14.668340 kernel: libata version 3.00 loaded. Oct 9 01:01:14.672587 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:01:14.679583 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 01:01:14.679824 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 01:01:14.684334 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 01:01:14.684566 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 01:01:14.687344 kernel: scsi host0: ahci Oct 9 01:01:14.690340 kernel: scsi host1: ahci Oct 9 01:01:14.693294 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:01:14.694791 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 01:01:14.694807 kernel: AES CTR mode by8 optimization enabled Oct 9 01:01:14.694817 kernel: scsi host2: ahci Oct 9 01:01:14.699575 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 9 01:01:14.701701 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475) Oct 9 01:01:14.702330 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:01:14.708359 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (467) Oct 9 01:01:14.708385 kernel: scsi host3: ahci Oct 9 01:01:14.708673 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:14.709881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:01:14.710077 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:14.711481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:14.718362 kernel: scsi host4: ahci Oct 9 01:01:14.720549 kernel: scsi host5: ahci Oct 9 01:01:14.720734 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 Oct 9 01:01:14.720751 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 Oct 9 01:01:14.721224 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 Oct 9 01:01:14.722964 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 Oct 9 01:01:14.723007 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 Oct 9 01:01:14.723019 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 Oct 9 01:01:14.727679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:14.736358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 01:01:14.741051 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 01:01:14.742593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 9 01:01:14.761466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 01:01:14.766454 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 01:01:14.767712 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 01:01:14.781546 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:01:14.784917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:14.793862 disk-uuid[559]: Primary Header is updated. Oct 9 01:01:14.793862 disk-uuid[559]: Secondary Entries is updated. Oct 9 01:01:14.793862 disk-uuid[559]: Secondary Header is updated. Oct 9 01:01:14.799353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:01:14.803345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:01:14.805717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 01:01:15.029349 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 01:01:15.029419 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 01:01:15.030333 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 9 01:01:15.031350 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 01:01:15.031378 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 01:01:15.032342 kernel: ata3.00: applying bridge limits Oct 9 01:01:15.032378 kernel: ata3.00: configured for UDMA/100 Oct 9 01:01:15.033350 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 01:01:15.038342 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 01:01:15.038361 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 01:01:15.081932 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 01:01:15.082160 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 01:01:15.098338 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 9 01:01:15.803341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:01:15.804002 disk-uuid[560]: The operation has completed successfully. Oct 9 01:01:15.830457 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:01:15.830583 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:01:15.860483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 01:01:15.865726 sh[595]: Success Oct 9 01:01:15.878372 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 01:01:15.909116 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:01:15.918785 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:01:15.923260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 9 01:01:15.932734 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 01:01:15.932763 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:01:15.932774 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:01:15.933728 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:01:15.934441 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:01:15.938892 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 01:01:15.941074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:01:15.952479 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 01:01:15.955093 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:01:15.964665 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:01:15.964764 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:01:15.964780 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:01:15.968344 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:01:15.976656 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:01:15.978332 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:01:15.990128 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 01:01:15.996487 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 9 01:01:16.054252 ignition[689]: Ignition 2.19.0 Oct 9 01:01:16.054270 ignition[689]: Stage: fetch-offline Oct 9 01:01:16.054424 ignition[689]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:16.054441 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:01:16.054688 ignition[689]: parsed url from cmdline: "" Oct 9 01:01:16.054694 ignition[689]: no config URL provided Oct 9 01:01:16.054714 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:01:16.054727 ignition[689]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:01:16.054785 ignition[689]: op(1): [started] loading QEMU firmware config module Oct 9 01:01:16.054792 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 01:01:16.062432 ignition[689]: op(1): [finished] loading QEMU firmware config module Oct 9 01:01:16.083608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:01:16.102075 ignition[689]: parsing config with SHA512: 0386b9d39ea9ab0fbddc43a6c7600c25b38176423c6e43ac3b848a963cc6041a3f228db003e72eee03d5f0ec02706d9c9c0579389c5f41a80741d47b398bc093 Oct 9 01:01:16.103572 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:01:16.108223 unknown[689]: fetched base config from "system" Oct 9 01:01:16.108234 unknown[689]: fetched user config from "qemu" Oct 9 01:01:16.110487 ignition[689]: fetch-offline: fetch-offline passed Oct 9 01:01:16.112892 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:01:16.110583 ignition[689]: Ignition finished successfully Oct 9 01:01:16.133355 systemd-networkd[783]: lo: Link UP Oct 9 01:01:16.133368 systemd-networkd[783]: lo: Gained carrier Oct 9 01:01:16.135254 systemd-networkd[783]: Enumeration completed Oct 9 01:01:16.135394 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 9 01:01:16.135976 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:16.135980 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:01:16.137062 systemd-networkd[783]: eth0: Link UP Oct 9 01:01:16.137066 systemd-networkd[783]: eth0: Gained carrier Oct 9 01:01:16.137072 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:16.137695 systemd[1]: Reached target network.target - Network. Oct 9 01:01:16.138584 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 01:01:16.142430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 01:01:16.153386 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:01:16.159259 ignition[786]: Ignition 2.19.0 Oct 9 01:01:16.159270 ignition[786]: Stage: kargs Oct 9 01:01:16.159501 ignition[786]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:16.159523 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:01:16.160334 ignition[786]: kargs: kargs passed Oct 9 01:01:16.163496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 01:01:16.160376 ignition[786]: Ignition finished successfully Oct 9 01:01:16.174661 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 01:01:16.184837 ignition[795]: Ignition 2.19.0 Oct 9 01:01:16.184847 ignition[795]: Stage: disks Oct 9 01:01:16.185002 ignition[795]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:16.185012 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:01:16.187365 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Oct 9 01:01:16.185774 ignition[795]: disks: disks passed Oct 9 01:01:16.190234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 01:01:16.185815 ignition[795]: Ignition finished successfully Oct 9 01:01:16.191665 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:01:16.192954 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:01:16.194643 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:01:16.196808 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:01:16.211487 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 01:01:16.223004 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 01:01:16.230801 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 01:01:16.247484 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 01:01:16.343357 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none. Oct 9 01:01:16.344226 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 01:01:16.345063 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 01:01:16.357444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:01:16.359411 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 01:01:16.360937 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Oct 9 01:01:16.368300 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Oct 9 01:01:16.368337 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:01:16.368353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:01:16.368370 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:01:16.360983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 01:01:16.374157 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:01:16.361011 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:01:16.367208 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 01:01:16.370893 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 01:01:16.375568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:01:16.408387 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 01:01:16.412437 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Oct 9 01:01:16.417568 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 01:01:16.421493 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 01:01:16.487618 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.81 Oct 9 01:01:16.487634 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Oct 9 01:01:16.497664 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 01:01:16.507470 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 01:01:16.510270 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Oct 9 01:01:16.518393 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:01:16.534642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 01:01:16.539814 ignition[929]: INFO : Ignition 2.19.0 Oct 9 01:01:16.539814 ignition[929]: INFO : Stage: mount Oct 9 01:01:16.541703 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:16.541703 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:01:16.541703 ignition[929]: INFO : mount: mount passed Oct 9 01:01:16.541703 ignition[929]: INFO : Ignition finished successfully Oct 9 01:01:16.547075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 01:01:16.558463 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 01:01:16.934101 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 01:01:16.945649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:01:16.952356 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Oct 9 01:01:16.954413 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:01:16.954441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:01:16.954456 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:01:16.957332 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:01:16.959233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 01:01:16.977273 ignition[958]: INFO : Ignition 2.19.0
Oct 9 01:01:16.977273 ignition[958]: INFO : Stage: files
Oct 9 01:01:16.979530 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:01:16.979530 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:01:16.979530 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:01:16.979530 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:01:16.979530 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:01:16.988106 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:01:16.988106 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:01:16.988106 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:01:16.988106 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:01:16.988106 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 01:01:16.983295 unknown[958]: wrote ssh authorized keys file for user: core
Oct 9 01:01:17.028000 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:01:17.115716 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:01:17.115716 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:01:17.119945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:01:17.132819 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 9 01:01:17.472494 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 01:01:17.912082 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 01:01:17.912082 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 01:01:17.915792 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:01:17.936587 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:01:17.941304 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:01:17.942916 ignition[958]: INFO : files: files passed
Oct 9 01:01:17.942916 ignition[958]: INFO : Ignition finished successfully
Oct 9 01:01:17.953679 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:01:17.961440 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:01:17.963756 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:01:17.965750 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:01:17.965865 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:01:17.977458 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 01:01:17.980799 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:01:17.980799 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:01:17.983910 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:01:17.987417 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:01:17.988843 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:01:17.999433 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:01:18.006426 systemd-networkd[783]: eth0: Gained IPv6LL
Oct 9 01:01:18.022468 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:01:18.022585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:01:18.024836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:01:18.026915 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:01:18.028981 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:01:18.029673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:01:18.045260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:01:18.046509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:01:18.058746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:01:18.058894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:01:18.073305 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:01:18.109387 ignition[1013]: INFO : Ignition 2.19.0
Oct 9 01:01:18.109387 ignition[1013]: INFO : Stage: umount
Oct 9 01:01:18.109387 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:01:18.109387 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:01:18.109387 ignition[1013]: INFO : umount: umount passed
Oct 9 01:01:18.109387 ignition[1013]: INFO : Ignition finished successfully
Oct 9 01:01:18.073625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:01:18.073727 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:01:18.074292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:01:18.074628 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:01:18.074951 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:01:18.075278 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:01:18.075622 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:01:18.075949 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:01:18.076274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:01:18.076623 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:01:18.076962 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:01:18.077301 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:01:18.077628 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:01:18.077730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:01:18.078148 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:01:18.078710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:01:18.079004 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:01:18.079118 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:01:18.079369 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:01:18.079480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:01:18.080031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:01:18.080132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:01:18.080449 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:01:18.080696 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:01:18.084355 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:01:18.084709 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:01:18.085024 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:01:18.085370 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:01:18.085465 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:01:18.085882 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:01:18.085963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:01:18.086384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:01:18.086495 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:01:18.086875 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:01:18.086970 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:01:18.087985 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:01:18.088278 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:01:18.088387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:01:18.089256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:01:18.089607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:01:18.089705 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:01:18.090005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:01:18.090095 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:01:18.093256 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:01:18.093381 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:01:18.109365 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:01:18.109490 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:01:18.110639 systemd[1]: Stopped target network.target - Network.
Oct 9 01:01:18.112128 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:01:18.112182 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:01:18.113876 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:01:18.113929 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:01:18.115848 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:01:18.115895 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:01:18.118288 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:01:18.118358 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:01:18.120611 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:01:18.122900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:01:18.125362 systemd-networkd[783]: eth0: DHCPv6 lease lost
Oct 9 01:01:18.126511 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:01:18.127096 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:01:18.127220 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:01:18.129856 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:01:18.129912 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:01:18.141386 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:01:18.142482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:01:18.142534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:01:18.144898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:01:18.147498 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:01:18.147639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:01:18.155218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:01:18.155287 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:01:18.156772 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:01:18.156821 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:01:18.158892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:01:18.158952 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:01:18.161816 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:01:18.161933 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:01:18.163694 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:01:18.163849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:01:18.166441 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:01:18.166505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:01:18.168401 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:01:18.168440 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:01:18.170233 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:01:18.170279 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:01:18.172294 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:01:18.172355 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:01:18.174377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:01:18.174424 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:01:18.193430 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:01:18.194777 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:01:18.194827 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:01:18.196917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:01:18.196961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:01:18.200090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:01:18.200196 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:01:18.354093 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:01:18.354238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:01:18.356987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:01:18.357700 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:01:18.357784 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:01:18.363475 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:01:18.371995 systemd[1]: Switching root.
Oct 9 01:01:18.400758 systemd-journald[193]: Journal stopped
Oct 9 01:01:19.596788 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:01:19.596852 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:01:19.596866 kernel: SELinux: policy capability open_perms=1
Oct 9 01:01:19.596879 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:01:19.596891 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:01:19.596902 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:01:19.596913 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:01:19.596928 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:01:19.596940 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:01:19.596951 kernel: audit: type=1403 audit(1728435678.827:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:01:19.596963 systemd[1]: Successfully loaded SELinux policy in 40.157ms.
Oct 9 01:01:19.596994 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.743ms.
Oct 9 01:01:19.597008 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:01:19.597020 systemd[1]: Detected virtualization kvm.
Oct 9 01:01:19.597032 systemd[1]: Detected architecture x86-64.
Oct 9 01:01:19.597046 systemd[1]: Detected first boot.
Oct 9 01:01:19.597060 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:01:19.597074 zram_generator::config[1057]: No configuration found.
Oct 9 01:01:19.597092 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:01:19.597115 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:01:19.597132 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:01:19.597148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:01:19.597166 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:01:19.597183 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:01:19.597203 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:01:19.597219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:01:19.597234 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:01:19.597247 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:01:19.597259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:01:19.597276 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:01:19.597288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:01:19.597301 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:01:19.597313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:01:19.597343 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:01:19.597360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:01:19.597377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:01:19.597395 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 01:01:19.597410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:01:19.597434 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:01:19.597452 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:01:19.597469 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:01:19.597490 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:01:19.597502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:01:19.597514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:01:19.597526 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:01:19.597538 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:01:19.597550 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:01:19.597561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:01:19.597573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:01:19.597587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:01:19.597599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:01:19.597611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:01:19.597627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:01:19.597639 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:01:19.597651 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:01:19.597663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:19.597677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:01:19.597695 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:01:19.597712 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:01:19.597724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:01:19.597736 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:01:19.597748 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:01:19.597760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:01:19.597772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:01:19.597784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:01:19.597796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:01:19.597808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:01:19.597822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:01:19.597834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:01:19.597845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:01:19.597857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:01:19.597869 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:01:19.597881 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:01:19.597893 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:01:19.597905 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:01:19.597919 kernel: fuse: init (API version 7.39)
Oct 9 01:01:19.597931 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:01:19.597942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:01:19.597955 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:01:19.597967 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:01:19.597979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:01:19.597991 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:01:19.598003 systemd[1]: Stopped verity-setup.service.
Oct 9 01:01:19.598015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:19.598030 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:01:19.598043 kernel: loop: module loaded
Oct 9 01:01:19.598054 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:01:19.598066 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:01:19.598078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:01:19.598092 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 01:01:19.598104 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:01:19.598136 systemd-journald[1131]: Collecting audit messages is disabled.
Oct 9 01:01:19.598165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:01:19.598178 kernel: ACPI: bus type drm_connector registered
Oct 9 01:01:19.598190 systemd-journald[1131]: Journal started
Oct 9 01:01:19.598215 systemd-journald[1131]: Runtime Journal (/run/log/journal/81491c02292b4f82b9a9fb76057e4be8) is 6.0M, max 48.3M, 42.2M free.
Oct 9 01:01:19.360163 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:01:19.382990 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 01:01:19.383457 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:01:19.602503 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:01:19.604206 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:01:19.604450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:01:19.606221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:01:19.606434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:01:19.608223 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:01:19.610140 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:01:19.610475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:01:19.612203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:01:19.612398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:01:19.614251 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:01:19.614459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:01:19.616178 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:01:19.616362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:01:19.618071 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:01:19.619812 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:01:19.621732 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:01:19.637632 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:01:19.649428 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:01:19.651874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:01:19.653023 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:01:19.653056 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:01:19.655287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:01:19.657735 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:01:19.660624 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:01:19.661864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:01:19.663618 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:01:19.668691 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:01:19.670064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:01:19.671636 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:01:19.673173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:01:19.674538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:01:19.680303 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:01:19.693444 systemd-journald[1131]: Time spent on flushing to /var/log/journal/81491c02292b4f82b9a9fb76057e4be8 is 33.731ms for 1019 entries.
Oct 9 01:01:19.693444 systemd-journald[1131]: System Journal (/var/log/journal/81491c02292b4f82b9a9fb76057e4be8) is 8.0M, max 195.6M, 187.6M free.
Oct 9 01:01:19.742511 systemd-journald[1131]: Received client request to flush runtime journal.
Oct 9 01:01:19.742580 kernel: loop0: detected capacity change from 0 to 210664
Oct 9 01:01:19.690508 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:01:19.696663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:01:19.696936 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:01:19.700511 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:01:19.702843 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:01:19.712403 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:01:19.714617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:01:19.718937 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:01:19.735550 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:01:19.743460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:01:19.748874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:01:19.750993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:01:19.762791 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 01:01:19.764573 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:01:19.774477 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:01:19.777478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:01:19.778395 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:01:19.789434 kernel: loop1: detected capacity change from 0 to 138192
Oct 9 01:01:19.803577 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Oct 9 01:01:19.803601 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Oct 9 01:01:19.810205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:01:19.832350 kernel: loop2: detected capacity change from 0 to 140992
Oct 9 01:01:19.867369 kernel: loop3: detected capacity change from 0 to 210664
Oct 9 01:01:19.877369 kernel: loop4: detected capacity change from 0 to 138192
Oct 9 01:01:19.889354 kernel: loop5: detected capacity change from 0 to 140992
Oct 9 01:01:19.899655 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 01:01:19.900232 (sd-merge)[1197]: Merged extensions into '/usr'.
Oct 9 01:01:19.903893 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:01:19.903909 systemd[1]: Reloading...
Oct 9 01:01:19.953358 zram_generator::config[1223]: No configuration found.
Oct 9 01:01:20.013447 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:01:20.086872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:01:20.136439 systemd[1]: Reloading finished in 232 ms.
Oct 9 01:01:20.173821 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:01:20.175616 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:01:20.187559 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:01:20.191594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:01:20.196357 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:01:20.196373 systemd[1]: Reloading...
Oct 9 01:01:20.226371 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:01:20.226774 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:01:20.227797 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:01:20.228110 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 01:01:20.228184 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 01:01:20.234175 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:01:20.234245 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 01:01:20.248996 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:01:20.249015 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 01:01:20.251335 zram_generator::config[1287]: No configuration found.
Oct 9 01:01:20.360187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:01:20.410197 systemd[1]: Reloading finished in 213 ms.
Oct 9 01:01:20.428890 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:01:20.441354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:01:20.450375 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:01:20.452941 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:01:20.455411 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:01:20.460938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:01:20.465085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:01:20.467876 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:01:20.471595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.471769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:01:20.473490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:01:20.477650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:01:20.484772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:01:20.486584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:01:20.491613 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:01:20.493437 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.494729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:01:20.496804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:01:20.496981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:01:20.498869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:01:20.499033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:01:20.505592 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:01:20.506095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:01:20.509004 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Oct 9 01:01:20.515377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:01:20.518781 augenrules[1360]: No rules
Oct 9 01:01:20.520210 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:01:20.520485 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:01:20.523105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.523661 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:01:20.537638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:01:20.543190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:01:20.551594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:01:20.553275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:01:20.556794 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:01:20.558315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.559530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:01:20.562277 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:01:20.565657 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:01:20.567582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:01:20.567769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:01:20.569602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:01:20.569830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:01:20.571761 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:01:20.571951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:01:20.579007 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:01:20.588082 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:01:20.594335 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 01:01:20.594603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.599345 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1372)
Oct 9 01:01:20.606505 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:01:20.609672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385)
Oct 9 01:01:20.608648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:01:20.610473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:01:20.615467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:01:20.619250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:01:20.622690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:01:20.625342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:01:20.628024 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:01:20.635463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:01:20.636920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:01:20.636948 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:01:20.638010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:01:20.638187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:01:20.648338 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1372)
Oct 9 01:01:20.642009 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:01:20.642172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:01:20.643613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:01:20.643772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:01:20.645400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:01:20.646366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:01:20.666901 augenrules[1401]: /sbin/augenrules: No change
Oct 9 01:01:20.665390 systemd-resolved[1329]: Positive Trust Anchors:
Oct 9 01:01:20.665402 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:01:20.665449 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:01:20.671031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:01:20.675275 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Oct 9 01:01:20.675911 augenrules[1436]: No rules
Oct 9 01:01:20.679479 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:01:20.680698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:01:20.680763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:01:20.680959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:01:20.683010 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:01:20.683249 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:01:20.685723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:01:20.691348 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 01:01:20.699334 kernel: ACPI: button: Power Button [PWRF]
Oct 9 01:01:20.705174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:01:20.713335 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 01:01:20.722066 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 9 01:01:20.722416 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 01:01:20.722626 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 01:01:20.722804 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 01:01:20.747634 systemd-networkd[1414]: lo: Link UP
Oct 9 01:01:20.747645 systemd-networkd[1414]: lo: Gained carrier
Oct 9 01:01:20.751413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:01:20.751458 systemd-networkd[1414]: Enumeration completed
Oct 9 01:01:20.752168 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:01:20.752173 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:01:20.753024 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:01:20.753525 systemd-networkd[1414]: eth0: Link UP
Oct 9 01:01:20.753542 systemd-networkd[1414]: eth0: Gained carrier
Oct 9 01:01:20.753557 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:01:20.754675 systemd[1]: Reached target network.target - Network.
Oct 9 01:01:20.756283 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:01:20.763458 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:01:20.765451 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection.
Oct 9 01:01:22.190282 systemd-resolved[1329]: Clock change detected. Flushing caches.
Oct 9 01:01:22.190389 systemd-timesyncd[1417]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 01:01:22.190429 systemd-timesyncd[1417]: Initial clock synchronization to Wed 2024-10-09 01:01:22.190222 UTC.
Oct 9 01:01:22.190533 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:01:22.200775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:01:22.207205 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 01:01:22.268517 kernel: kvm_amd: TSC scaling supported
Oct 9 01:01:22.268580 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 01:01:22.268625 kernel: kvm_amd: Nested Paging enabled
Oct 9 01:01:22.269529 kernel: kvm_amd: LBR virtualization supported
Oct 9 01:01:22.269596 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 01:01:22.270541 kernel: kvm_amd: Virtual GIF supported
Oct 9 01:01:22.290255 kernel: EDAC MC: Ver: 3.0.0
Oct 9 01:01:22.317886 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:01:22.319648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:01:22.334364 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:01:22.343055 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:01:22.379354 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:01:22.380937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:01:22.382057 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:01:22.383259 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:01:22.384526 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:01:22.385965 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:01:22.387193 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:01:22.388484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:01:22.389747 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:01:22.389776 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:01:22.390674 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:01:22.392559 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:01:22.395386 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:01:22.404790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:01:22.407203 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:01:22.408803 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:01:22.409946 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:01:22.410913 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:01:22.411871 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:01:22.411899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:01:22.412868 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:01:22.414919 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:01:22.418198 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:01:22.418611 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:01:22.422360 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:01:22.423394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:01:22.425420 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:01:22.428289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:01:22.433435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:01:22.435069 jq[1468]: false
Oct 9 01:01:22.435763 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:01:22.440351 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:01:22.441826 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:01:22.442274 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:01:22.443077 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:01:22.447455 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:01:22.450230 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:01:22.452563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:01:22.452788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:01:22.459007 jq[1480]: true
Oct 9 01:01:22.458838 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:01:22.459070 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found loop3
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found loop4
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found loop5
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found sr0
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found vda
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found vda1
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found vda2
Oct 9 01:01:22.461170 extend-filesystems[1469]: Found vda3
Oct 9 01:01:22.478388 extend-filesystems[1469]: Found usr
Oct 9 01:01:22.478388 extend-filesystems[1469]: Found vda4
Oct 9 01:01:22.478388 extend-filesystems[1469]: Found vda6
Oct 9 01:01:22.478388 extend-filesystems[1469]: Found vda7
Oct 9 01:01:22.478388 extend-filesystems[1469]: Found vda9
Oct 9 01:01:22.478388 extend-filesystems[1469]: Checking size of /dev/vda9
Oct 9 01:01:22.484164 update_engine[1478]: I20241009 01:01:22.478001 1478 main.cc:92] Flatcar Update Engine starting
Oct 9 01:01:22.484164 update_engine[1478]: I20241009 01:01:22.483556 1478 update_check_scheduler.cc:74] Next update check in 8m4s
Oct 9 01:01:22.467976 dbus-daemon[1467]: [system] SELinux support is enabled
Oct 9 01:01:22.469589 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:01:22.484742 extend-filesystems[1469]: Resized partition /dev/vda9
Oct 9 01:01:22.476127 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:01:22.478327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:01:22.485969 jq[1491]: true
Oct 9 01:01:22.490215 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:01:22.497220 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385)
Oct 9 01:01:22.505233 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 01:01:22.509549 tar[1485]: linux-amd64/helm
Oct 9 01:01:22.507557 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:01:22.511068 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:01:22.514701 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:01:22.514791 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:01:22.516621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:01:22.516641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:01:22.526417 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:01:22.531224 systemd-logind[1477]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 01:01:22.531262 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 01:01:22.532451 systemd-logind[1477]: New seat seat0.
Oct 9 01:01:22.536430 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:01:22.549216 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 01:01:22.578731 extend-filesystems[1497]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 01:01:22.578731 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 01:01:22.578731 extend-filesystems[1497]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 01:01:22.585645 extend-filesystems[1469]: Resized filesystem in /dev/vda9
Oct 9 01:01:22.581729 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:01:22.581955 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:01:22.590058 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:01:22.595579 bash[1521]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:01:22.597417 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:01:22.600128 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 01:01:22.705931 containerd[1499]: time="2024-10-09T01:01:22.705690698Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:01:22.715324 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:01:22.729403 containerd[1499]: time="2024-10-09T01:01:22.729350801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731037 containerd[1499]: time="2024-10-09T01:01:22.730979184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731037 containerd[1499]: time="2024-10-09T01:01:22.731021874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:01:22.731037 containerd[1499]: time="2024-10-09T01:01:22.731040429Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:01:22.731289 containerd[1499]: time="2024-10-09T01:01:22.731258808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:01:22.731289 containerd[1499]: time="2024-10-09T01:01:22.731282623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731366 containerd[1499]: time="2024-10-09T01:01:22.731351492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731366 containerd[1499]: time="2024-10-09T01:01:22.731363685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731588 containerd[1499]: time="2024-10-09T01:01:22.731560324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731588 containerd[1499]: time="2024-10-09T01:01:22.731580802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731634 containerd[1499]: time="2024-10-09T01:01:22.731595600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731634 containerd[1499]: time="2024-10-09T01:01:22.731606200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731719 containerd[1499]: time="2024-10-09T01:01:22.731699174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.731958 containerd[1499]: time="2024-10-09T01:01:22.731931981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:01:22.732092 containerd[1499]: time="2024-10-09T01:01:22.732072454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:01:22.732092 containerd[1499]: time="2024-10-09T01:01:22.732089526Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:01:22.732220 containerd[1499]: time="2024-10-09T01:01:22.732201446Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:01:22.732287 containerd[1499]: time="2024-10-09T01:01:22.732269574Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:01:22.739513 containerd[1499]: time="2024-10-09T01:01:22.739063953Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:01:22.739680 containerd[1499]: time="2024-10-09T01:01:22.739659810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.739733208Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.739755961Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.739773694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.739917774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740125894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740248875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740288178Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740301874Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740322312Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740336329Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740348131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740361496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740375312Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742034 containerd[1499]: time="2024-10-09T01:01:22.740388096Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.740733 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740400239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740410858Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740431828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740451565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740463788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740476231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740487823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740500507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740512369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740522999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740535091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740548256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740559948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742382 containerd[1499]: time="2024-10-09T01:01:22.740572231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740583963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740598420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740616224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740631793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740642012Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740683640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740700802Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740710711Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740722853Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740731510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740742701Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740752970Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:01:22.742626 containerd[1499]: time="2024-10-09T01:01:22.740766816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741013458Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741053153Z" level=info msg="Connect containerd service" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741080825Z" level=info msg="using legacy CRI server" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741086826Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741175131Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.741702771Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742000459Z" level=info msg="Start subscribing containerd event" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742062375Z" level=info msg="Start recovering state" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742128790Z" level=info msg="Start event monitor" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742146102Z" level=info msg="Start snapshots 
syncer" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742155981Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742163935Z" level=info msg="Start streaming server" Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742005599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742362868Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:01:22.742853 containerd[1499]: time="2024-10-09T01:01:22.742417651Z" level=info msg="containerd successfully booted in 0.037692s" Oct 9 01:01:22.753415 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:01:22.754476 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:01:22.763089 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:01:22.763316 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:01:22.765931 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 01:01:22.769133 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:39922.service - OpenSSH per-connection server daemon (10.0.0.1:39922). Oct 9 01:01:22.771802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:01:22.787840 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:01:22.795620 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:01:22.797927 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 01:01:22.799276 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 9 01:01:22.817843 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 39922 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:22.820103 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:22.828363 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 01:01:22.838402 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:01:22.841822 systemd-logind[1477]: New session 1 of user core. Oct 9 01:01:22.851834 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:01:22.859433 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:01:22.865236 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:01:22.920493 tar[1485]: linux-amd64/LICENSE Oct 9 01:01:22.920575 tar[1485]: linux-amd64/README.md Oct 9 01:01:22.937584 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:01:22.972747 systemd[1559]: Queued start job for default target default.target. Oct 9 01:01:22.981473 systemd[1559]: Created slice app.slice - User Application Slice. Oct 9 01:01:22.981498 systemd[1559]: Reached target paths.target - Paths. Oct 9 01:01:22.981512 systemd[1559]: Reached target timers.target - Timers. Oct 9 01:01:22.983007 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:01:22.993639 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:01:22.993760 systemd[1559]: Reached target sockets.target - Sockets. Oct 9 01:01:22.993779 systemd[1559]: Reached target basic.target - Basic System. Oct 9 01:01:22.993814 systemd[1559]: Reached target default.target - Main User Target. Oct 9 01:01:22.993845 systemd[1559]: Startup finished in 121ms. Oct 9 01:01:22.994357 systemd[1]: Started user@500.service - User Manager for UID 500. 
Oct 9 01:01:22.996931 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 01:01:23.058699 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:39932.service - OpenSSH per-connection server daemon (10.0.0.1:39932). Oct 9 01:01:23.094104 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 39932 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:23.095538 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:23.099393 systemd-logind[1477]: New session 2 of user core. Oct 9 01:01:23.109312 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 01:01:23.163403 sshd[1573]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:23.176709 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:39932.service: Deactivated successfully. Oct 9 01:01:23.178381 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:01:23.179822 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:01:23.180978 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:39936.service - OpenSSH per-connection server daemon (10.0.0.1:39936). Oct 9 01:01:23.183003 systemd-logind[1477]: Removed session 2. Oct 9 01:01:23.216324 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 39936 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:23.217982 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:23.221493 systemd-logind[1477]: New session 3 of user core. Oct 9 01:01:23.231304 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 01:01:23.284798 sshd[1580]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:23.288716 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:39936.service: Deactivated successfully. Oct 9 01:01:23.290486 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:01:23.291000 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. 
Oct 9 01:01:23.291739 systemd-logind[1477]: Removed session 3. Oct 9 01:01:24.038387 systemd-networkd[1414]: eth0: Gained IPv6LL Oct 9 01:01:24.041603 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:01:24.043608 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:01:24.055420 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 01:01:24.057863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:24.060089 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:01:24.078703 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 01:01:24.078947 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 01:01:24.080615 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 01:01:24.082063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 01:01:24.668245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:24.669880 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 01:01:24.671153 systemd[1]: Startup finished in 705ms (kernel) + 5.123s (initrd) + 4.459s (userspace) = 10.287s. 
Oct 9 01:01:24.673828 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:01:25.104873 kubelet[1608]: E1009 01:01:25.104647 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:01:25.108772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:01:25.108972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:01:33.298789 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:41978.service - OpenSSH per-connection server daemon (10.0.0.1:41978). Oct 9 01:01:33.334439 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 41978 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:33.336024 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:33.339746 systemd-logind[1477]: New session 4 of user core. Oct 9 01:01:33.356309 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:01:33.410495 sshd[1622]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:33.429342 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:41978.service: Deactivated successfully. Oct 9 01:01:33.431124 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:01:33.432554 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:01:33.444419 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:41984.service - OpenSSH per-connection server daemon (10.0.0.1:41984). Oct 9 01:01:33.445263 systemd-logind[1477]: Removed session 4. 
Oct 9 01:01:33.476151 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 41984 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:33.477584 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:33.481222 systemd-logind[1477]: New session 5 of user core. Oct 9 01:01:33.491299 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:01:33.541255 sshd[1629]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:33.548767 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:41984.service: Deactivated successfully. Oct 9 01:01:33.551515 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:01:33.553625 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:01:33.565516 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:41998.service - OpenSSH per-connection server daemon (10.0.0.1:41998). Oct 9 01:01:33.566472 systemd-logind[1477]: Removed session 5. Oct 9 01:01:33.595557 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 41998 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:33.597169 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:33.601414 systemd-logind[1477]: New session 6 of user core. Oct 9 01:01:33.611311 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 01:01:33.666993 sshd[1636]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:33.684628 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:41998.service: Deactivated successfully. Oct 9 01:01:33.686603 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:01:33.688357 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:01:33.689713 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:42000.service - OpenSSH per-connection server daemon (10.0.0.1:42000). Oct 9 01:01:33.690590 systemd-logind[1477]: Removed session 6. 
Oct 9 01:01:33.724104 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 42000 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:33.725727 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:33.730020 systemd-logind[1477]: New session 7 of user core. Oct 9 01:01:33.740307 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 01:01:33.799410 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:01:33.799825 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:01:33.816685 sudo[1646]: pam_unix(sudo:session): session closed for user root Oct 9 01:01:33.819019 sshd[1643]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:33.839359 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:42000.service: Deactivated successfully. Oct 9 01:01:33.841210 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:01:33.843069 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:01:33.851569 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). Oct 9 01:01:33.852701 systemd-logind[1477]: Removed session 7. Oct 9 01:01:33.882860 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:33.884632 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:33.888465 systemd-logind[1477]: New session 8 of user core. Oct 9 01:01:33.899319 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 9 01:01:33.952607 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:01:33.952937 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:01:33.956310 sudo[1655]: pam_unix(sudo:session): session closed for user root Oct 9 01:01:33.961757 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:01:33.962140 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:01:33.981490 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:01:34.008651 augenrules[1677]: No rules Oct 9 01:01:34.009504 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:01:34.009743 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:01:34.011021 sudo[1654]: pam_unix(sudo:session): session closed for user root Oct 9 01:01:34.012832 sshd[1651]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:34.026849 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:42008.service: Deactivated successfully. Oct 9 01:01:34.028394 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:01:34.029789 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:01:34.031006 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:42010.service - OpenSSH per-connection server daemon (10.0.0.1:42010). Oct 9 01:01:34.031858 systemd-logind[1477]: Removed session 8. Oct 9 01:01:34.065581 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:01:34.067471 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:34.071502 systemd-logind[1477]: New session 9 of user core. Oct 9 01:01:34.086322 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 9 01:01:34.139671 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:01:34.139998 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:01:34.392382 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:01:34.392577 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:01:34.619596 dockerd[1708]: time="2024-10-09T01:01:34.619533160Z" level=info msg="Starting up" Oct 9 01:01:34.727994 dockerd[1708]: time="2024-10-09T01:01:34.727868669Z" level=info msg="Loading containers: start." Oct 9 01:01:34.913213 kernel: Initializing XFRM netlink socket Oct 9 01:01:34.992966 systemd-networkd[1414]: docker0: Link UP Oct 9 01:01:35.029665 dockerd[1708]: time="2024-10-09T01:01:35.029622086Z" level=info msg="Loading containers: done." Oct 9 01:01:35.042563 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1452109435-merged.mount: Deactivated successfully. Oct 9 01:01:35.044396 dockerd[1708]: time="2024-10-09T01:01:35.044354880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:01:35.044475 dockerd[1708]: time="2024-10-09T01:01:35.044451311Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:01:35.044600 dockerd[1708]: time="2024-10-09T01:01:35.044574191Z" level=info msg="Daemon has completed initialization" Oct 9 01:01:35.082945 dockerd[1708]: time="2024-10-09T01:01:35.082866550Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:01:35.083104 systemd[1]: Started docker.service - Docker Application Container Engine. 
Oct 9 01:01:35.359365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:01:35.365372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:35.519537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:35.525759 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:01:35.575644 kubelet[1915]: E1009 01:01:35.575570 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:01:35.584270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:01:35.584522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:01:36.008968 containerd[1499]: time="2024-10-09T01:01:36.008915161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 9 01:01:37.269413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591582338.mount: Deactivated successfully. 
Oct 9 01:01:38.246394 containerd[1499]: time="2024-10-09T01:01:38.246320204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:38.246997 containerd[1499]: time="2024-10-09T01:01:38.246947340Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 9 01:01:38.248120 containerd[1499]: time="2024-10-09T01:01:38.248079353Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:38.250721 containerd[1499]: time="2024-10-09T01:01:38.250690839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:38.251696 containerd[1499]: time="2024-10-09T01:01:38.251660287Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.242699832s" Oct 9 01:01:38.251696 containerd[1499]: time="2024-10-09T01:01:38.251694461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 9 01:01:38.271461 containerd[1499]: time="2024-10-09T01:01:38.271419075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 9 01:01:40.194149 containerd[1499]: time="2024-10-09T01:01:40.194092166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:40.195150 containerd[1499]: time="2024-10-09T01:01:40.195119112Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 9 01:01:40.196598 containerd[1499]: time="2024-10-09T01:01:40.196576895Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:40.199928 containerd[1499]: time="2024-10-09T01:01:40.199886711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:40.201003 containerd[1499]: time="2024-10-09T01:01:40.200967618Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 1.9295096s" Oct 9 01:01:40.201063 containerd[1499]: time="2024-10-09T01:01:40.201003986Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 9 01:01:40.224986 containerd[1499]: time="2024-10-09T01:01:40.224942340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 01:01:41.209417 containerd[1499]: time="2024-10-09T01:01:41.209357108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.210453 containerd[1499]: time="2024-10-09T01:01:41.210411064Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 9 01:01:41.211932 containerd[1499]: time="2024-10-09T01:01:41.211900507Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.214738 containerd[1499]: time="2024-10-09T01:01:41.214710846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.215991 containerd[1499]: time="2024-10-09T01:01:41.215957754Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 990.973034ms" Oct 9 01:01:41.216050 containerd[1499]: time="2024-10-09T01:01:41.215989974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 9 01:01:41.238710 containerd[1499]: time="2024-10-09T01:01:41.238645584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 9 01:01:42.309050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601725507.mount: Deactivated successfully. 
Oct 9 01:01:43.110877 containerd[1499]: time="2024-10-09T01:01:43.110802227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.111697 containerd[1499]: time="2024-10-09T01:01:43.111616594Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 9 01:01:43.112854 containerd[1499]: time="2024-10-09T01:01:43.112818487Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.114886 containerd[1499]: time="2024-10-09T01:01:43.114838615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.115482 containerd[1499]: time="2024-10-09T01:01:43.115449450Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.87676273s" Oct 9 01:01:43.115512 containerd[1499]: time="2024-10-09T01:01:43.115480879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 9 01:01:43.138756 containerd[1499]: time="2024-10-09T01:01:43.138707339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:01:43.752673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796103099.mount: Deactivated successfully. 
Oct 9 01:01:44.421479 containerd[1499]: time="2024-10-09T01:01:44.421418021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:44.422323 containerd[1499]: time="2024-10-09T01:01:44.422288704Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 01:01:44.425236 containerd[1499]: time="2024-10-09T01:01:44.424096353Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:44.428587 containerd[1499]: time="2024-10-09T01:01:44.428559161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:44.429641 containerd[1499]: time="2024-10-09T01:01:44.429599922Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.29084796s" Oct 9 01:01:44.429678 containerd[1499]: time="2024-10-09T01:01:44.429640278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 01:01:44.449739 containerd[1499]: time="2024-10-09T01:01:44.449694510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:01:45.000851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465177772.mount: Deactivated successfully. 
Oct 9 01:01:45.008360 containerd[1499]: time="2024-10-09T01:01:45.008301137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.009203 containerd[1499]: time="2024-10-09T01:01:45.009144809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 01:01:45.010543 containerd[1499]: time="2024-10-09T01:01:45.010490091Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.012966 containerd[1499]: time="2024-10-09T01:01:45.012928403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:45.013722 containerd[1499]: time="2024-10-09T01:01:45.013685212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 563.950477ms" Oct 9 01:01:45.013772 containerd[1499]: time="2024-10-09T01:01:45.013721700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 01:01:45.039299 containerd[1499]: time="2024-10-09T01:01:45.039242552Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 9 01:01:45.641222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 01:01:45.648369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 01:01:45.652721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270183471.mount: Deactivated successfully. Oct 9 01:01:45.799919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:45.805612 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:01:45.894944 kubelet[2102]: E1009 01:01:45.894759 2102 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:01:45.899702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:01:45.899907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:01:49.159047 containerd[1499]: time="2024-10-09T01:01:49.158934814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:49.160412 containerd[1499]: time="2024-10-09T01:01:49.160347793Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 9 01:01:49.166463 containerd[1499]: time="2024-10-09T01:01:49.166379523Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:49.171895 containerd[1499]: time="2024-10-09T01:01:49.171825133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:49.173144 containerd[1499]: time="2024-10-09T01:01:49.172603513Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.133316217s" Oct 9 01:01:49.173144 containerd[1499]: time="2024-10-09T01:01:49.172673063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 9 01:01:52.151436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:52.161445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:52.181457 systemd[1]: Reloading requested from client PID 2236 ('systemctl') (unit session-9.scope)... Oct 9 01:01:52.181474 systemd[1]: Reloading... Oct 9 01:01:52.257369 zram_generator::config[2278]: No configuration found. Oct 9 01:01:52.577258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:01:52.658753 systemd[1]: Reloading finished in 476 ms. Oct 9 01:01:52.713238 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:01:52.713350 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:01:52.713702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:52.716460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:52.869605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:01:52.876342 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:01:52.971997 kubelet[2324]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:01:52.971997 kubelet[2324]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:01:52.971997 kubelet[2324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:01:52.972418 kubelet[2324]: I1009 01:01:52.972046 2324 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:01:53.176214 kubelet[2324]: I1009 01:01:53.176086 2324 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:01:53.176214 kubelet[2324]: I1009 01:01:53.176114 2324 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:01:53.176386 kubelet[2324]: I1009 01:01:53.176371 2324 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:01:53.190214 kubelet[2324]: E1009 01:01:53.190153 2324 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.191406 kubelet[2324]: I1009 01:01:53.191360 2324 dynamic_cafile_content.go:157] "Starting 
controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:01:53.206216 kubelet[2324]: I1009 01:01:53.206167 2324 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:01:53.207420 kubelet[2324]: I1009 01:01:53.207381 2324 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:01:53.207564 kubelet[2324]: I1009 01:01:53.207406 2324 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManager
Policy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:01:53.208007 kubelet[2324]: I1009 01:01:53.207984 2324 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:01:53.208007 kubelet[2324]: I1009 01:01:53.208002 2324 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:01:53.208152 kubelet[2324]: I1009 01:01:53.208131 2324 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:01:53.208766 kubelet[2324]: I1009 01:01:53.208742 2324 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:01:53.208766 kubelet[2324]: I1009 01:01:53.208763 2324 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:01:53.208817 kubelet[2324]: I1009 01:01:53.208787 2324 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:01:53.208817 kubelet[2324]: I1009 01:01:53.208814 2324 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:01:53.209534 kubelet[2324]: W1009 01:01:53.209454 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.209534 kubelet[2324]: E1009 01:01:53.209508 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.209835 kubelet[2324]: W1009 01:01:53.209791 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.209835 kubelet[2324]: E1009 01:01:53.209835 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.212860 kubelet[2324]: I1009 01:01:53.212837 2324 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:01:53.214090 kubelet[2324]: I1009 01:01:53.214068 2324 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:01:53.214160 kubelet[2324]: W1009 01:01:53.214123 2324 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:01:53.215122 kubelet[2324]: I1009 01:01:53.214767 2324 server.go:1264] "Started kubelet" Oct 9 01:01:53.215122 kubelet[2324]: I1009 01:01:53.214892 2324 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:01:53.215857 kubelet[2324]: I1009 01:01:53.215319 2324 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:01:53.215857 kubelet[2324]: I1009 01:01:53.215377 2324 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:01:53.217273 kubelet[2324]: I1009 01:01:53.216150 2324 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:01:53.217273 kubelet[2324]: I1009 01:01:53.216434 2324 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:01:53.218046 kubelet[2324]: E1009 01:01:53.218001 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:53.218096 kubelet[2324]: I1009 01:01:53.218071 2324 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:01:53.218205 kubelet[2324]: I1009 01:01:53.218174 2324 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 
01:01:53.218268 kubelet[2324]: I1009 01:01:53.218252 2324 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:01:53.218645 kubelet[2324]: W1009 01:01:53.218601 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.218690 kubelet[2324]: E1009 01:01:53.218650 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.219053 kubelet[2324]: E1009 01:01:53.219011 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Oct 9 01:01:53.220517 kubelet[2324]: I1009 01:01:53.219770 2324 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:01:53.220517 kubelet[2324]: I1009 01:01:53.219857 2324 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:01:53.220608 kubelet[2324]: E1009 01:01:53.220542 2324 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:01:53.221145 kubelet[2324]: I1009 01:01:53.220876 2324 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:01:53.221145 kubelet[2324]: E1009 01:01:53.220789 2324 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca31f9f437f14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:01:53.214742292 +0000 UTC m=+0.333497228,LastTimestamp:2024-10-09 01:01:53.214742292 +0000 UTC m=+0.333497228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:01:53.239008 kubelet[2324]: I1009 01:01:53.238918 2324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:01:53.240577 kubelet[2324]: I1009 01:01:53.240518 2324 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:01:53.240577 kubelet[2324]: I1009 01:01:53.240536 2324 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:01:53.240577 kubelet[2324]: I1009 01:01:53.240551 2324 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:01:53.240577 kubelet[2324]: I1009 01:01:53.240568 2324 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:01:53.240755 kubelet[2324]: I1009 01:01:53.240595 2324 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:01:53.240755 kubelet[2324]: I1009 01:01:53.240621 2324 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:01:53.240755 kubelet[2324]: E1009 01:01:53.240678 2324 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:01:53.241826 kubelet[2324]: W1009 01:01:53.241718 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.241826 kubelet[2324]: E1009 01:01:53.241797 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:53.279533 kubelet[2324]: I1009 01:01:53.279487 2324 policy_none.go:49] "None policy: Start" Oct 9 01:01:53.280308 kubelet[2324]: I1009 01:01:53.280260 2324 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:01:53.280308 kubelet[2324]: I1009 01:01:53.280286 2324 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:01:53.287722 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:01:53.301323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:01:53.304406 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 01:01:53.316063 kubelet[2324]: I1009 01:01:53.316018 2324 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:01:53.316373 kubelet[2324]: I1009 01:01:53.316257 2324 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:01:53.316508 kubelet[2324]: I1009 01:01:53.316448 2324 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:01:53.317460 kubelet[2324]: E1009 01:01:53.317429 2324 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 01:01:53.319711 kubelet[2324]: I1009 01:01:53.319680 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:53.320061 kubelet[2324]: E1009 01:01:53.320029 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Oct 9 01:01:53.341255 kubelet[2324]: I1009 01:01:53.341218 2324 topology_manager.go:215] "Topology Admit Handler" podUID="c2d5375d66e657a5cb89382eec63bafe" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:01:53.342311 kubelet[2324]: I1009 01:01:53.342262 2324 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:01:53.342984 kubelet[2324]: I1009 01:01:53.342949 2324 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:01:53.348973 systemd[1]: Created slice kubepods-burstable-podc2d5375d66e657a5cb89382eec63bafe.slice - libcontainer container kubepods-burstable-podc2d5375d66e657a5cb89382eec63bafe.slice. 
Oct 9 01:01:53.362940 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 9 01:01:53.381885 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. Oct 9 01:01:53.418904 kubelet[2324]: I1009 01:01:53.418860 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:53.418904 kubelet[2324]: I1009 01:01:53.418890 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:53.418904 kubelet[2324]: I1009 01:01:53.418906 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:53.418904 kubelet[2324]: I1009 01:01:53.418920 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 
01:01:53.419126 kubelet[2324]: I1009 01:01:53.418936 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:53.419126 kubelet[2324]: I1009 01:01:53.418951 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:53.419126 kubelet[2324]: I1009 01:01:53.419052 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:01:53.419126 kubelet[2324]: I1009 01:01:53.419095 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:53.419126 kubelet[2324]: I1009 01:01:53.419121 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 
01:01:53.419407 kubelet[2324]: E1009 01:01:53.419374 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Oct 9 01:01:53.521583 kubelet[2324]: I1009 01:01:53.521471 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:53.521914 kubelet[2324]: E1009 01:01:53.521869 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Oct 9 01:01:53.660830 kubelet[2324]: E1009 01:01:53.660750 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:53.661476 containerd[1499]: time="2024-10-09T01:01:53.661439279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2d5375d66e657a5cb89382eec63bafe,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:53.679709 kubelet[2324]: E1009 01:01:53.679658 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:53.680120 containerd[1499]: time="2024-10-09T01:01:53.680082485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:53.684505 kubelet[2324]: E1009 01:01:53.684482 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:53.684927 containerd[1499]: time="2024-10-09T01:01:53.684872576Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:53.820867 kubelet[2324]: E1009 01:01:53.820743 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Oct 9 01:01:53.923320 kubelet[2324]: I1009 01:01:53.923263 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:53.923703 kubelet[2324]: E1009 01:01:53.923669 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Oct 9 01:01:54.106852 kubelet[2324]: W1009 01:01:54.106776 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.106852 kubelet[2324]: E1009 01:01:54.106848 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.266952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796864154.mount: Deactivated successfully. 
Oct 9 01:01:54.275789 containerd[1499]: time="2024-10-09T01:01:54.275728401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:01:54.278708 containerd[1499]: time="2024-10-09T01:01:54.278649888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 01:01:54.279826 containerd[1499]: time="2024-10-09T01:01:54.279780458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:01:54.280820 containerd[1499]: time="2024-10-09T01:01:54.280789190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:01:54.281888 containerd[1499]: time="2024-10-09T01:01:54.281802079Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:01:54.282930 containerd[1499]: time="2024-10-09T01:01:54.282865593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:01:54.283938 containerd[1499]: time="2024-10-09T01:01:54.283841883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:01:54.285745 containerd[1499]: time="2024-10-09T01:01:54.285713442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:01:54.287529 
containerd[1499]: time="2024-10-09T01:01:54.287492869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 607.332528ms" Oct 9 01:01:54.288151 containerd[1499]: time="2024-10-09T01:01:54.288124573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.591178ms" Oct 9 01:01:54.291359 containerd[1499]: time="2024-10-09T01:01:54.291314444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.391564ms" Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.575978499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.577693084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.577708313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.577781069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.576450003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.576533871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.576545262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.577881 containerd[1499]: time="2024-10-09T01:01:54.576646692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.600702 kubelet[2324]: W1009 01:01:54.600614 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.600702 kubelet[2324]: E1009 01:01:54.600697 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.621371 kubelet[2324]: E1009 01:01:54.621319 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Oct 9 01:01:54.664913 kubelet[2324]: W1009 01:01:54.664844 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.664913 kubelet[2324]: E1009 01:01:54.664896 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.725683 kubelet[2324]: I1009 01:01:54.725649 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:54.726026 kubelet[2324]: E1009 01:01:54.725999 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Oct 9 01:01:54.755356 systemd[1]: Started cri-containerd-e8a2729c40f02892aef7bf2d0e784992a2796f4b762bfe12d7a3b3ca1da9f2e1.scope - libcontainer container e8a2729c40f02892aef7bf2d0e784992a2796f4b762bfe12d7a3b3ca1da9f2e1. Oct 9 01:01:54.756963 systemd[1]: Started cri-containerd-ecdf63912539cb2dc7147de6a583e9dc7a90b7612de673f839be018e6737171c.scope - libcontainer container ecdf63912539cb2dc7147de6a583e9dc7a90b7612de673f839be018e6737171c. 
Oct 9 01:01:54.786939 kubelet[2324]: W1009 01:01:54.786694 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.786939 kubelet[2324]: E1009 01:01:54.786734 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Oct 9 01:01:54.796128 containerd[1499]: time="2024-10-09T01:01:54.796087263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8a2729c40f02892aef7bf2d0e784992a2796f4b762bfe12d7a3b3ca1da9f2e1\"" Oct 9 01:01:54.796884 kubelet[2324]: E1009 01:01:54.796861 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:54.799520 containerd[1499]: time="2024-10-09T01:01:54.799490935Z" level=info msg="CreateContainer within sandbox \"e8a2729c40f02892aef7bf2d0e784992a2796f4b762bfe12d7a3b3ca1da9f2e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:01:54.806149 containerd[1499]: time="2024-10-09T01:01:54.806108203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2d5375d66e657a5cb89382eec63bafe,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecdf63912539cb2dc7147de6a583e9dc7a90b7612de673f839be018e6737171c\"" Oct 9 01:01:54.806724 kubelet[2324]: E1009 01:01:54.806701 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:54.809065 containerd[1499]: time="2024-10-09T01:01:54.809011256Z" level=info msg="CreateContainer within sandbox \"ecdf63912539cb2dc7147de6a583e9dc7a90b7612de673f839be018e6737171c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:01:54.849857 containerd[1499]: time="2024-10-09T01:01:54.849594650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:54.849857 containerd[1499]: time="2024-10-09T01:01:54.849643772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:54.849857 containerd[1499]: time="2024-10-09T01:01:54.849659752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.849857 containerd[1499]: time="2024-10-09T01:01:54.849754640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:54.864598 containerd[1499]: time="2024-10-09T01:01:54.864480180Z" level=info msg="CreateContainer within sandbox \"ecdf63912539cb2dc7147de6a583e9dc7a90b7612de673f839be018e6737171c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40aeb7d67323b1d0a42ff2e6f909cd0c73d50ba9cdf308baff908d44c5b3528c\"" Oct 9 01:01:54.864958 containerd[1499]: time="2024-10-09T01:01:54.864925876Z" level=info msg="StartContainer for \"40aeb7d67323b1d0a42ff2e6f909cd0c73d50ba9cdf308baff908d44c5b3528c\"" Oct 9 01:01:54.865252 containerd[1499]: time="2024-10-09T01:01:54.865221159Z" level=info msg="CreateContainer within sandbox \"e8a2729c40f02892aef7bf2d0e784992a2796f4b762bfe12d7a3b3ca1da9f2e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c21a031964e551c3add7f4ab512ed67b0e93a145b744f0050b58234bf46f25a\"" Oct 9 01:01:54.866252 containerd[1499]: time="2024-10-09T01:01:54.865555566Z" level=info msg="StartContainer for \"5c21a031964e551c3add7f4ab512ed67b0e93a145b744f0050b58234bf46f25a\"" Oct 9 01:01:54.868340 systemd[1]: Started cri-containerd-3a2ca90b9a6f31e82be03168ca2b5c3661757ecc8e2091b95cb00c216287b1d4.scope - libcontainer container 3a2ca90b9a6f31e82be03168ca2b5c3661757ecc8e2091b95cb00c216287b1d4. Oct 9 01:01:54.895331 systemd[1]: Started cri-containerd-40aeb7d67323b1d0a42ff2e6f909cd0c73d50ba9cdf308baff908d44c5b3528c.scope - libcontainer container 40aeb7d67323b1d0a42ff2e6f909cd0c73d50ba9cdf308baff908d44c5b3528c. Oct 9 01:01:54.899062 systemd[1]: Started cri-containerd-5c21a031964e551c3add7f4ab512ed67b0e93a145b744f0050b58234bf46f25a.scope - libcontainer container 5c21a031964e551c3add7f4ab512ed67b0e93a145b744f0050b58234bf46f25a. 
Oct 9 01:01:54.915305 containerd[1499]: time="2024-10-09T01:01:54.915148403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a2ca90b9a6f31e82be03168ca2b5c3661757ecc8e2091b95cb00c216287b1d4\"" Oct 9 01:01:54.915931 kubelet[2324]: E1009 01:01:54.915913 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:54.918936 containerd[1499]: time="2024-10-09T01:01:54.918916228Z" level=info msg="CreateContainer within sandbox \"3a2ca90b9a6f31e82be03168ca2b5c3661757ecc8e2091b95cb00c216287b1d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:01:54.940060 containerd[1499]: time="2024-10-09T01:01:54.939995802Z" level=info msg="StartContainer for \"40aeb7d67323b1d0a42ff2e6f909cd0c73d50ba9cdf308baff908d44c5b3528c\" returns successfully" Oct 9 01:01:54.950213 containerd[1499]: time="2024-10-09T01:01:54.947732177Z" level=info msg="StartContainer for \"5c21a031964e551c3add7f4ab512ed67b0e93a145b744f0050b58234bf46f25a\" returns successfully" Oct 9 01:01:54.950213 containerd[1499]: time="2024-10-09T01:01:54.947755050Z" level=info msg="CreateContainer within sandbox \"3a2ca90b9a6f31e82be03168ca2b5c3661757ecc8e2091b95cb00c216287b1d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"40e6cbc75992b8913dc3041778840d09cbd65e47462320edb3adb98129fffecd\"" Oct 9 01:01:54.950213 containerd[1499]: time="2024-10-09T01:01:54.948439073Z" level=info msg="StartContainer for \"40e6cbc75992b8913dc3041778840d09cbd65e47462320edb3adb98129fffecd\"" Oct 9 01:01:54.982444 systemd[1]: Started cri-containerd-40e6cbc75992b8913dc3041778840d09cbd65e47462320edb3adb98129fffecd.scope - libcontainer container 40e6cbc75992b8913dc3041778840d09cbd65e47462320edb3adb98129fffecd. 
Oct 9 01:01:55.030894 containerd[1499]: time="2024-10-09T01:01:55.030846394Z" level=info msg="StartContainer for \"40e6cbc75992b8913dc3041778840d09cbd65e47462320edb3adb98129fffecd\" returns successfully" Oct 9 01:01:55.251644 kubelet[2324]: E1009 01:01:55.251442 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:55.253023 kubelet[2324]: E1009 01:01:55.253003 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:55.255126 kubelet[2324]: E1009 01:01:55.255096 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:55.999901 kubelet[2324]: E1009 01:01:55.999787 2324 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fca31f9f437f14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:01:53.214742292 +0000 UTC m=+0.333497228,LastTimestamp:2024-10-09 01:01:53.214742292 +0000 UTC m=+0.333497228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:01:56.053105 kubelet[2324]: E1009 01:01:56.052996 2324 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fca31f9f9bd398 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:01:53.220531096 +0000 UTC m=+0.339286042,LastTimestamp:2024-10-09 01:01:53.220531096 +0000 UTC m=+0.339286042,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:01:56.105717 kubelet[2324]: E1009 01:01:56.105608 2324 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fca31fa0b75c16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:01:53.239112726 +0000 UTC m=+0.357867662,LastTimestamp:2024-10-09 01:01:53.239112726 +0000 UTC m=+0.357867662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:01:56.224704 kubelet[2324]: E1009 01:01:56.224667 2324 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:01:56.256556 kubelet[2324]: E1009 01:01:56.256451 2324 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:56.295260 kubelet[2324]: E1009 01:01:56.295232 2324 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the 
condition; caused by: nodes "localhost" not found Oct 9 01:01:56.327419 kubelet[2324]: I1009 01:01:56.327397 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:56.330752 kubelet[2324]: I1009 01:01:56.330733 2324 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:01:56.336001 kubelet[2324]: E1009 01:01:56.335975 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.436460 kubelet[2324]: E1009 01:01:56.436420 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.536961 kubelet[2324]: E1009 01:01:56.536877 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.637326 kubelet[2324]: E1009 01:01:56.637306 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.737510 kubelet[2324]: E1009 01:01:56.737483 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.837932 kubelet[2324]: E1009 01:01:56.837911 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:56.938453 kubelet[2324]: E1009 01:01:56.938394 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:57.038956 kubelet[2324]: E1009 01:01:57.038903 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:57.140023 kubelet[2324]: E1009 01:01:57.139875 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:57.241045 kubelet[2324]: E1009 01:01:57.241004 2324 kubelet_node_status.go:462] "Error getting the current node 
from lister" err="node \"localhost\" not found" Oct 9 01:01:57.341722 kubelet[2324]: E1009 01:01:57.341671 2324 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:01:57.889540 systemd[1]: Reloading requested from client PID 2603 ('systemctl') (unit session-9.scope)... Oct 9 01:01:57.889559 systemd[1]: Reloading... Oct 9 01:01:57.966325 zram_generator::config[2645]: No configuration found. Oct 9 01:01:58.070063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:01:58.158751 systemd[1]: Reloading finished in 268 ms. Oct 9 01:01:58.202445 kubelet[2324]: I1009 01:01:58.202350 2324 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:01:58.202436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:58.225482 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:01:58.225799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:58.233538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:58.386714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:58.391902 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:01:58.434329 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:01:58.434329 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Oct 9 01:01:58.434329 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:01:58.434329 kubelet[2687]: I1009 01:01:58.433948 2687 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:01:58.438765 kubelet[2687]: I1009 01:01:58.438732 2687 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:01:58.438765 kubelet[2687]: I1009 01:01:58.438760 2687 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:01:58.439002 kubelet[2687]: I1009 01:01:58.438981 2687 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:01:58.440218 kubelet[2687]: I1009 01:01:58.440194 2687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:01:58.441247 kubelet[2687]: I1009 01:01:58.441216 2687 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:01:58.448265 kubelet[2687]: I1009 01:01:58.448240 2687 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:01:58.448503 kubelet[2687]: I1009 01:01:58.448467 2687 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:01:58.448656 kubelet[2687]: I1009 01:01:58.448497 2687 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:01:58.448738 kubelet[2687]: I1009 01:01:58.448667 2687 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:01:58.448738 
kubelet[2687]: I1009 01:01:58.448678 2687 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:01:58.448738 kubelet[2687]: I1009 01:01:58.448715 2687 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:01:58.448811 kubelet[2687]: I1009 01:01:58.448806 2687 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:01:58.448835 kubelet[2687]: I1009 01:01:58.448816 2687 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:01:58.448860 kubelet[2687]: I1009 01:01:58.448836 2687 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:01:58.448860 kubelet[2687]: I1009 01:01:58.448851 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:01:58.449475 kubelet[2687]: I1009 01:01:58.449451 2687 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:01:58.449670 kubelet[2687]: I1009 01:01:58.449593 2687 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:01:58.451252 kubelet[2687]: I1009 01:01:58.451221 2687 server.go:1264] "Started kubelet" Oct 9 01:01:58.455201 kubelet[2687]: I1009 01:01:58.454787 2687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:01:58.455201 kubelet[2687]: I1009 01:01:58.455123 2687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:01:58.455853 kubelet[2687]: I1009 01:01:58.455836 2687 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:01:58.457521 kubelet[2687]: I1009 01:01:58.457492 2687 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:01:58.457722 kubelet[2687]: I1009 01:01:58.457709 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:01:58.460634 kubelet[2687]: I1009 01:01:58.460621 2687 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Oct 9 01:01:58.460964 kubelet[2687]: I1009 01:01:58.460950 2687 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:01:58.462542 kubelet[2687]: I1009 01:01:58.462528 2687 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:01:58.464664 kubelet[2687]: E1009 01:01:58.464632 2687 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:01:58.464743 kubelet[2687]: I1009 01:01:58.464729 2687 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:01:58.464785 kubelet[2687]: I1009 01:01:58.464777 2687 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:01:58.464920 kubelet[2687]: I1009 01:01:58.464903 2687 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:01:58.470078 kubelet[2687]: I1009 01:01:58.470045 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:01:58.471541 kubelet[2687]: I1009 01:01:58.471488 2687 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:01:58.471541 kubelet[2687]: I1009 01:01:58.471532 2687 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:01:58.471620 kubelet[2687]: I1009 01:01:58.471553 2687 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:01:58.471620 kubelet[2687]: E1009 01:01:58.471597 2687 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:01:58.500995 kubelet[2687]: I1009 01:01:58.500960 2687 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:01:58.500995 kubelet[2687]: I1009 01:01:58.500982 2687 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:01:58.500995 kubelet[2687]: I1009 01:01:58.500999 2687 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:01:58.501219 kubelet[2687]: I1009 01:01:58.501199 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:01:58.501251 kubelet[2687]: I1009 01:01:58.501214 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:01:58.501251 kubelet[2687]: I1009 01:01:58.501232 2687 policy_none.go:49] "None policy: Start" Oct 9 01:01:58.501765 kubelet[2687]: I1009 01:01:58.501746 2687 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:01:58.501765 kubelet[2687]: I1009 01:01:58.501766 2687 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:01:58.501914 kubelet[2687]: I1009 01:01:58.501897 2687 state_mem.go:75] "Updated machine memory state" Oct 9 01:01:58.506064 kubelet[2687]: I1009 01:01:58.506042 2687 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:01:58.506653 kubelet[2687]: I1009 01:01:58.506434 2687 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:01:58.506653 kubelet[2687]: I1009 01:01:58.506535 2687 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:01:58.565458 kubelet[2687]: I1009 01:01:58.565429 2687 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:01:58.572817 kubelet[2687]: I1009 01:01:58.572743 2687 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:01:58.572951 kubelet[2687]: I1009 01:01:58.572867 2687 topology_manager.go:215] "Topology Admit Handler" podUID="c2d5375d66e657a5cb89382eec63bafe" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:01:58.572951 kubelet[2687]: I1009 01:01:58.572917 2687 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:01:58.573006 kubelet[2687]: I1009 01:01:58.572981 2687 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 01:01:58.573067 kubelet[2687]: I1009 01:01:58.573045 2687 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:01:58.664266 kubelet[2687]: I1009 01:01:58.664229 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:58.664266 kubelet[2687]: I1009 01:01:58.664259 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:58.664266 kubelet[2687]: I1009 01:01:58.664278 2687 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:58.664522 kubelet[2687]: I1009 01:01:58.664297 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:58.664522 kubelet[2687]: I1009 01:01:58.664312 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d5375d66e657a5cb89382eec63bafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2d5375d66e657a5cb89382eec63bafe\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:58.664522 kubelet[2687]: I1009 01:01:58.664326 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:58.664522 kubelet[2687]: I1009 01:01:58.664364 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:58.664522 kubelet[2687]: 
I1009 01:01:58.664401 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:01:58.664635 kubelet[2687]: I1009 01:01:58.664426 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:01:58.883581 kubelet[2687]: E1009 01:01:58.883537 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:58.883717 kubelet[2687]: E1009 01:01:58.883619 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:58.883717 kubelet[2687]: E1009 01:01:58.883675 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:59.449926 kubelet[2687]: I1009 01:01:59.449845 2687 apiserver.go:52] "Watching apiserver" Oct 9 01:01:59.463344 kubelet[2687]: I1009 01:01:59.463296 2687 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:01:59.488902 kubelet[2687]: E1009 01:01:59.488442 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:59.488902 kubelet[2687]: E1009 
01:01:59.488858 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:59.498450 kubelet[2687]: E1009 01:01:59.498383 2687 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 01:01:59.498930 kubelet[2687]: E1009 01:01:59.498906 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:01:59.534001 kubelet[2687]: I1009 01:01:59.533901 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.533855151 podStartE2EDuration="1.533855151s" podCreationTimestamp="2024-10-09 01:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:59.518946285 +0000 UTC m=+1.122837673" watchObservedRunningTime="2024-10-09 01:01:59.533855151 +0000 UTC m=+1.137746539" Oct 9 01:01:59.534222 kubelet[2687]: I1009 01:01:59.534088 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.534080353 podStartE2EDuration="1.534080353s" podCreationTimestamp="2024-10-09 01:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:59.533600603 +0000 UTC m=+1.137492001" watchObservedRunningTime="2024-10-09 01:01:59.534080353 +0000 UTC m=+1.137971741" Oct 9 01:01:59.548553 kubelet[2687]: I1009 01:01:59.548489 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.548469944 
podStartE2EDuration="1.548469944s" podCreationTimestamp="2024-10-09 01:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:59.548271372 +0000 UTC m=+1.152162770" watchObservedRunningTime="2024-10-09 01:01:59.548469944 +0000 UTC m=+1.152361332" Oct 9 01:02:00.491118 kubelet[2687]: E1009 01:02:00.491082 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:02.624200 kubelet[2687]: E1009 01:02:02.624147 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:02.848568 sudo[1688]: pam_unix(sudo:session): session closed for user root Oct 9 01:02:02.850488 sshd[1685]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:02.855031 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:42010.service: Deactivated successfully. Oct 9 01:02:02.856856 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:02:02.857032 systemd[1]: session-9.scope: Consumed 5.049s CPU time, 187.8M memory peak, 0B memory swap peak. Oct 9 01:02:02.857569 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:02:02.858416 systemd-logind[1477]: Removed session 9. 
Oct 9 01:02:03.588928 kubelet[2687]: E1009 01:02:03.588873 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:05.870168 kubelet[2687]: E1009 01:02:05.870136 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:06.498834 kubelet[2687]: E1009 01:02:06.498807 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:07.385258 update_engine[1478]: I20241009 01:02:07.385196 1478 update_attempter.cc:509] Updating boot flags... Oct 9 01:02:07.411215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2780) Oct 9 01:02:07.444919 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2781) Oct 9 01:02:07.478218 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2781) Oct 9 01:02:12.635278 kubelet[2687]: E1009 01:02:12.635238 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:12.685285 kubelet[2687]: I1009 01:02:12.685255 2687 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:02:12.685666 containerd[1499]: time="2024-10-09T01:02:12.685632811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 01:02:12.686048 kubelet[2687]: I1009 01:02:12.685804 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:02:13.592839 kubelet[2687]: E1009 01:02:13.592807 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:13.615994 kubelet[2687]: I1009 01:02:13.615946 2687 topology_manager.go:215] "Topology Admit Handler" podUID="4110b558-7d94-4d76-bfe3-f691ccbea6c4" podNamespace="kube-system" podName="kube-proxy-qlfgn" Oct 9 01:02:13.625336 systemd[1]: Created slice kubepods-besteffort-pod4110b558_7d94_4d76_bfe3_f691ccbea6c4.slice - libcontainer container kubepods-besteffort-pod4110b558_7d94_4d76_bfe3_f691ccbea6c4.slice. Oct 9 01:02:13.660530 kubelet[2687]: I1009 01:02:13.660490 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4110b558-7d94-4d76-bfe3-f691ccbea6c4-kube-proxy\") pod \"kube-proxy-qlfgn\" (UID: \"4110b558-7d94-4d76-bfe3-f691ccbea6c4\") " pod="kube-system/kube-proxy-qlfgn" Oct 9 01:02:13.660530 kubelet[2687]: I1009 01:02:13.660524 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4zg\" (UniqueName: \"kubernetes.io/projected/4110b558-7d94-4d76-bfe3-f691ccbea6c4-kube-api-access-qc4zg\") pod \"kube-proxy-qlfgn\" (UID: \"4110b558-7d94-4d76-bfe3-f691ccbea6c4\") " pod="kube-system/kube-proxy-qlfgn" Oct 9 01:02:13.660530 kubelet[2687]: I1009 01:02:13.660545 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4110b558-7d94-4d76-bfe3-f691ccbea6c4-xtables-lock\") pod \"kube-proxy-qlfgn\" (UID: \"4110b558-7d94-4d76-bfe3-f691ccbea6c4\") " pod="kube-system/kube-proxy-qlfgn" Oct 9 01:02:13.660996 kubelet[2687]: 
I1009 01:02:13.660560 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4110b558-7d94-4d76-bfe3-f691ccbea6c4-lib-modules\") pod \"kube-proxy-qlfgn\" (UID: \"4110b558-7d94-4d76-bfe3-f691ccbea6c4\") " pod="kube-system/kube-proxy-qlfgn" Oct 9 01:02:13.823937 kubelet[2687]: I1009 01:02:13.823891 2687 topology_manager.go:215] "Topology Admit Handler" podUID="4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-hsmbl" Oct 9 01:02:13.831467 systemd[1]: Created slice kubepods-besteffort-pod4e9fe5ce_9599_4d0f_8eb5_fc6860e9ecfb.slice - libcontainer container kubepods-besteffort-pod4e9fe5ce_9599_4d0f_8eb5_fc6860e9ecfb.slice. Oct 9 01:02:13.861865 kubelet[2687]: I1009 01:02:13.861746 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjw4v\" (UniqueName: \"kubernetes.io/projected/4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb-kube-api-access-qjw4v\") pod \"tigera-operator-77f994b5bb-hsmbl\" (UID: \"4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb\") " pod="tigera-operator/tigera-operator-77f994b5bb-hsmbl" Oct 9 01:02:13.861865 kubelet[2687]: I1009 01:02:13.861787 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb-var-lib-calico\") pod \"tigera-operator-77f994b5bb-hsmbl\" (UID: \"4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb\") " pod="tigera-operator/tigera-operator-77f994b5bb-hsmbl" Oct 9 01:02:13.940312 kubelet[2687]: E1009 01:02:13.940285 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:13.940736 containerd[1499]: time="2024-10-09T01:02:13.940703165Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qlfgn,Uid:4110b558-7d94-4d76-bfe3-f691ccbea6c4,Namespace:kube-system,Attempt:0,}" Oct 9 01:02:13.967257 containerd[1499]: time="2024-10-09T01:02:13.966992088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:13.967257 containerd[1499]: time="2024-10-09T01:02:13.967070687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:13.967257 containerd[1499]: time="2024-10-09T01:02:13.967083772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:13.967537 containerd[1499]: time="2024-10-09T01:02:13.967498847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:13.993338 systemd[1]: Started cri-containerd-9b184a5ccbe60070862177ec8055aeab55b1ceee3d3be6dad6078f3336c327f0.scope - libcontainer container 9b184a5ccbe60070862177ec8055aeab55b1ceee3d3be6dad6078f3336c327f0. 
Oct 9 01:02:14.015223 containerd[1499]: time="2024-10-09T01:02:14.015149947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlfgn,Uid:4110b558-7d94-4d76-bfe3-f691ccbea6c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b184a5ccbe60070862177ec8055aeab55b1ceee3d3be6dad6078f3336c327f0\"" Oct 9 01:02:14.015689 kubelet[2687]: E1009 01:02:14.015668 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:14.017761 containerd[1499]: time="2024-10-09T01:02:14.017733442Z" level=info msg="CreateContainer within sandbox \"9b184a5ccbe60070862177ec8055aeab55b1ceee3d3be6dad6078f3336c327f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:02:14.036243 containerd[1499]: time="2024-10-09T01:02:14.036198000Z" level=info msg="CreateContainer within sandbox \"9b184a5ccbe60070862177ec8055aeab55b1ceee3d3be6dad6078f3336c327f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bed8dd97e9a163ff58608ac8d29f8f29580a75070777bd0f717e72c653535ce7\"" Oct 9 01:02:14.036711 containerd[1499]: time="2024-10-09T01:02:14.036677217Z" level=info msg="StartContainer for \"bed8dd97e9a163ff58608ac8d29f8f29580a75070777bd0f717e72c653535ce7\"" Oct 9 01:02:14.067358 systemd[1]: Started cri-containerd-bed8dd97e9a163ff58608ac8d29f8f29580a75070777bd0f717e72c653535ce7.scope - libcontainer container bed8dd97e9a163ff58608ac8d29f8f29580a75070777bd0f717e72c653535ce7. 
Oct 9 01:02:14.096652 containerd[1499]: time="2024-10-09T01:02:14.096616338Z" level=info msg="StartContainer for \"bed8dd97e9a163ff58608ac8d29f8f29580a75070777bd0f717e72c653535ce7\" returns successfully" Oct 9 01:02:14.136739 containerd[1499]: time="2024-10-09T01:02:14.136077602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-hsmbl,Uid:4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:02:14.170053 containerd[1499]: time="2024-10-09T01:02:14.169974826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:14.170053 containerd[1499]: time="2024-10-09T01:02:14.170034829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:14.170053 containerd[1499]: time="2024-10-09T01:02:14.170047674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:14.170289 containerd[1499]: time="2024-10-09T01:02:14.170134258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:14.197340 systemd[1]: Started cri-containerd-9684eb6f70072bd4475b9aaf5624e1bd40efe83d02e000ada1bc493cb38fbaa2.scope - libcontainer container 9684eb6f70072bd4475b9aaf5624e1bd40efe83d02e000ada1bc493cb38fbaa2. 
Oct 9 01:02:14.235076 containerd[1499]: time="2024-10-09T01:02:14.235037095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-hsmbl,Uid:4e9fe5ce-9599-4d0f-8eb5-fc6860e9ecfb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9684eb6f70072bd4475b9aaf5624e1bd40efe83d02e000ada1bc493cb38fbaa2\"" Oct 9 01:02:14.236484 containerd[1499]: time="2024-10-09T01:02:14.236465707Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:02:14.511541 kubelet[2687]: E1009 01:02:14.511419 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:14.518797 kubelet[2687]: I1009 01:02:14.518739 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qlfgn" podStartSLOduration=1.5187022350000001 podStartE2EDuration="1.518702235s" podCreationTimestamp="2024-10-09 01:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:02:14.518601926 +0000 UTC m=+16.122493324" watchObservedRunningTime="2024-10-09 01:02:14.518702235 +0000 UTC m=+16.122593623" Oct 9 01:02:16.314767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248363069.mount: Deactivated successfully. 
Oct 9 01:02:16.816647 containerd[1499]: time="2024-10-09T01:02:16.816597492Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:16.817684 containerd[1499]: time="2024-10-09T01:02:16.817652295Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136513" Oct 9 01:02:16.819064 containerd[1499]: time="2024-10-09T01:02:16.819013997Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:16.821247 containerd[1499]: time="2024-10-09T01:02:16.821216489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:16.824152 containerd[1499]: time="2024-10-09T01:02:16.822640258Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.586149945s" Oct 9 01:02:16.824152 containerd[1499]: time="2024-10-09T01:02:16.822674563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 01:02:16.825886 containerd[1499]: time="2024-10-09T01:02:16.825865542Z" level=info msg="CreateContainer within sandbox \"9684eb6f70072bd4475b9aaf5624e1bd40efe83d02e000ada1bc493cb38fbaa2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:02:16.840798 containerd[1499]: time="2024-10-09T01:02:16.840767664Z" level=info msg="CreateContainer within sandbox 
\"9684eb6f70072bd4475b9aaf5624e1bd40efe83d02e000ada1bc493cb38fbaa2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"42da1e45b673bc4cf7aa8a589a14bce6d37dc7ca33616ffd783e903297a94373\"" Oct 9 01:02:16.841300 containerd[1499]: time="2024-10-09T01:02:16.841278669Z" level=info msg="StartContainer for \"42da1e45b673bc4cf7aa8a589a14bce6d37dc7ca33616ffd783e903297a94373\"" Oct 9 01:02:16.873393 systemd[1]: Started cri-containerd-42da1e45b673bc4cf7aa8a589a14bce6d37dc7ca33616ffd783e903297a94373.scope - libcontainer container 42da1e45b673bc4cf7aa8a589a14bce6d37dc7ca33616ffd783e903297a94373. Oct 9 01:02:16.900502 containerd[1499]: time="2024-10-09T01:02:16.900461805Z" level=info msg="StartContainer for \"42da1e45b673bc4cf7aa8a589a14bce6d37dc7ca33616ffd783e903297a94373\" returns successfully" Oct 9 01:02:17.524001 kubelet[2687]: I1009 01:02:17.523701 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-hsmbl" podStartSLOduration=1.9349402470000001 podStartE2EDuration="4.523684023s" podCreationTimestamp="2024-10-09 01:02:13 +0000 UTC" firstStartedPulling="2024-10-09 01:02:14.236104764 +0000 UTC m=+15.839996152" lastFinishedPulling="2024-10-09 01:02:16.82484854 +0000 UTC m=+18.428739928" observedRunningTime="2024-10-09 01:02:17.523665499 +0000 UTC m=+19.127556887" watchObservedRunningTime="2024-10-09 01:02:17.523684023 +0000 UTC m=+19.127575411" Oct 9 01:02:19.748689 kubelet[2687]: I1009 01:02:19.747569 2687 topology_manager.go:215] "Topology Admit Handler" podUID="afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" podNamespace="calico-system" podName="calico-typha-686c6b6bfd-hp5gj" Oct 9 01:02:19.755568 systemd[1]: Created slice kubepods-besteffort-podafe9a1bf_c4ce_41ab_9f3e_1d89eaf20b0b.slice - libcontainer container kubepods-besteffort-podafe9a1bf_c4ce_41ab_9f3e_1d89eaf20b0b.slice. 
Oct 9 01:02:19.793397 kubelet[2687]: I1009 01:02:19.793348 2687 topology_manager.go:215] "Topology Admit Handler" podUID="dd6b227f-b57f-486d-9e9d-474d464236c5" podNamespace="calico-system" podName="calico-node-z9rr8" Oct 9 01:02:19.803946 systemd[1]: Created slice kubepods-besteffort-poddd6b227f_b57f_486d_9e9d_474d464236c5.slice - libcontainer container kubepods-besteffort-poddd6b227f_b57f_486d_9e9d_474d464236c5.slice. Oct 9 01:02:19.900388 kubelet[2687]: I1009 01:02:19.900337 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-policysync\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900388 kubelet[2687]: I1009 01:02:19.900388 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd6b227f-b57f-486d-9e9d-474d464236c5-node-certs\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900604 kubelet[2687]: I1009 01:02:19.900408 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-tigera-ca-bundle\") pod \"calico-typha-686c6b6bfd-hp5gj\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " pod="calico-system/calico-typha-686c6b6bfd-hp5gj" Oct 9 01:02:19.900604 kubelet[2687]: I1009 01:02:19.900433 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4mv\" (UniqueName: \"kubernetes.io/projected/dd6b227f-b57f-486d-9e9d-474d464236c5-kube-api-access-lz4mv\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 
01:02:19.900604 kubelet[2687]: I1009 01:02:19.900475 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-log-dir\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900604 kubelet[2687]: I1009 01:02:19.900495 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-lib-calico\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900604 kubelet[2687]: I1009 01:02:19.900512 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-bin-dir\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900781 kubelet[2687]: I1009 01:02:19.900529 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-flexvol-driver-host\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900781 kubelet[2687]: I1009 01:02:19.900554 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-run-calico\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900781 kubelet[2687]: I1009 01:02:19.900573 2687 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-lib-modules\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900781 kubelet[2687]: I1009 01:02:19.900590 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-xtables-lock\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900781 kubelet[2687]: I1009 01:02:19.900610 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-typha-certs\") pod \"calico-typha-686c6b6bfd-hp5gj\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " pod="calico-system/calico-typha-686c6b6bfd-hp5gj" Oct 9 01:02:19.900940 kubelet[2687]: I1009 01:02:19.900630 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxz8\" (UniqueName: \"kubernetes.io/projected/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-kube-api-access-hzxz8\") pod \"calico-typha-686c6b6bfd-hp5gj\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " pod="calico-system/calico-typha-686c6b6bfd-hp5gj" Oct 9 01:02:19.900940 kubelet[2687]: I1009 01:02:19.900651 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd6b227f-b57f-486d-9e9d-474d464236c5-tigera-ca-bundle\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.900940 kubelet[2687]: I1009 01:02:19.900670 2687 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-net-dir\") pod \"calico-node-z9rr8\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " pod="calico-system/calico-node-z9rr8" Oct 9 01:02:19.914091 kubelet[2687]: I1009 01:02:19.914042 2687 topology_manager.go:215] "Topology Admit Handler" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" podNamespace="calico-system" podName="csi-node-driver-4xz24" Oct 9 01:02:19.914421 kubelet[2687]: E1009 01:02:19.914391 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:20.001745 kubelet[2687]: I1009 01:02:20.001592 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a-socket-dir\") pod \"csi-node-driver-4xz24\" (UID: \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\") " pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:20.001745 kubelet[2687]: I1009 01:02:20.001654 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a-registration-dir\") pod \"csi-node-driver-4xz24\" (UID: \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\") " pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:20.001745 kubelet[2687]: I1009 01:02:20.001712 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a-kubelet-dir\") pod \"csi-node-driver-4xz24\" 
(UID: \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\") " pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:20.005392 kubelet[2687]: I1009 01:02:20.001794 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a-varrun\") pod \"csi-node-driver-4xz24\" (UID: \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\") " pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:20.005392 kubelet[2687]: I1009 01:02:20.001874 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzngd\" (UniqueName: \"kubernetes.io/projected/a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a-kube-api-access-nzngd\") pod \"csi-node-driver-4xz24\" (UID: \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\") " pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:20.005767 kubelet[2687]: E1009 01:02:20.005745 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.005857 kubelet[2687]: W1009 01:02:20.005842 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.005957 kubelet[2687]: E1009 01:02:20.005941 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.007517 kubelet[2687]: E1009 01:02:20.007482 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.007664 kubelet[2687]: W1009 01:02:20.007630 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.009251 kubelet[2687]: E1009 01:02:20.009234 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.010176 kubelet[2687]: E1009 01:02:20.010161 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.010266 kubelet[2687]: W1009 01:02:20.010252 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.011237 kubelet[2687]: E1009 01:02:20.011216 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.011819 kubelet[2687]: E1009 01:02:20.011499 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.011896 kubelet[2687]: W1009 01:02:20.011883 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.011963 kubelet[2687]: E1009 01:02:20.011951 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.013153 kubelet[2687]: E1009 01:02:20.013126 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.013248 kubelet[2687]: W1009 01:02:20.013232 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.013338 kubelet[2687]: E1009 01:02:20.013324 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.014122 kubelet[2687]: E1009 01:02:20.014089 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.014317 kubelet[2687]: W1009 01:02:20.014232 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.014921 kubelet[2687]: E1009 01:02:20.014894 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.015194 kubelet[2687]: E1009 01:02:20.015161 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.015250 kubelet[2687]: W1009 01:02:20.015177 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.015250 kubelet[2687]: E1009 01:02:20.015235 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.015574 kubelet[2687]: E1009 01:02:20.015556 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.015574 kubelet[2687]: W1009 01:02:20.015571 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.015653 kubelet[2687]: E1009 01:02:20.015593 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.015958 kubelet[2687]: E1009 01:02:20.015924 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.016010 kubelet[2687]: W1009 01:02:20.015964 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.016010 kubelet[2687]: E1009 01:02:20.016061 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.016337 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.018820 kubelet[2687]: W1009 01:02:20.016350 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.016400 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.016683 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.018820 kubelet[2687]: W1009 01:02:20.016693 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.016730 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.018226 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.018820 kubelet[2687]: W1009 01:02:20.018236 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.018820 kubelet[2687]: E1009 01:02:20.018248 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.019436 kubelet[2687]: E1009 01:02:20.019025 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.019436 kubelet[2687]: W1009 01:02:20.019037 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.019436 kubelet[2687]: E1009 01:02:20.019250 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.060897 kubelet[2687]: E1009 01:02:20.060867 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:20.062498 containerd[1499]: time="2024-10-09T01:02:20.062442990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686c6b6bfd-hp5gj,Uid:afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:20.090686 containerd[1499]: time="2024-10-09T01:02:20.090394866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:20.090686 containerd[1499]: time="2024-10-09T01:02:20.090456652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:20.090686 containerd[1499]: time="2024-10-09T01:02:20.090468885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:20.090686 containerd[1499]: time="2024-10-09T01:02:20.090547303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:20.102487 kubelet[2687]: E1009 01:02:20.102454 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.102487 kubelet[2687]: W1009 01:02:20.102481 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.102671 kubelet[2687]: E1009 01:02:20.102504 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.103112 kubelet[2687]: E1009 01:02:20.103090 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.103112 kubelet[2687]: W1009 01:02:20.103103 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.103248 kubelet[2687]: E1009 01:02:20.103120 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.103826 kubelet[2687]: E1009 01:02:20.103795 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.103826 kubelet[2687]: W1009 01:02:20.103811 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.104129 kubelet[2687]: E1009 01:02:20.103837 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.104385 kubelet[2687]: E1009 01:02:20.104364 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.104385 kubelet[2687]: W1009 01:02:20.104378 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.104496 kubelet[2687]: E1009 01:02:20.104428 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.104824 kubelet[2687]: E1009 01:02:20.104798 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.105489 kubelet[2687]: W1009 01:02:20.105262 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.105489 kubelet[2687]: E1009 01:02:20.105352 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.106072 kubelet[2687]: E1009 01:02:20.105959 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.106072 kubelet[2687]: W1009 01:02:20.105972 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.106072 kubelet[2687]: E1009 01:02:20.106040 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.107756 kubelet[2687]: E1009 01:02:20.106566 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.107756 kubelet[2687]: W1009 01:02:20.106579 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.107756 kubelet[2687]: E1009 01:02:20.106734 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:20.107756 kubelet[2687]: E1009 01:02:20.107666 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.107996 containerd[1499]: time="2024-10-09T01:02:20.107954563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9rr8,Uid:dd6b227f-b57f-486d-9e9d-474d464236c5,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:20.109392 kubelet[2687]: E1009 01:02:20.109375 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.109392 kubelet[2687]: W1009 01:02:20.109390 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.109560 kubelet[2687]: E1009 01:02:20.109468 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.109690 kubelet[2687]: E1009 01:02:20.109653 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.109690 kubelet[2687]: W1009 01:02:20.109678 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.110394 kubelet[2687]: E1009 01:02:20.109969 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.110645 kubelet[2687]: E1009 01:02:20.110624 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.110645 kubelet[2687]: W1009 01:02:20.110640 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.110827 kubelet[2687]: E1009 01:02:20.110719 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.110900 kubelet[2687]: E1009 01:02:20.110880 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.110900 kubelet[2687]: W1009 01:02:20.110892 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.111016 kubelet[2687]: E1009 01:02:20.110981 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.111565 kubelet[2687]: E1009 01:02:20.111547 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.111565 kubelet[2687]: W1009 01:02:20.111559 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.111666 kubelet[2687]: E1009 01:02:20.111600 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.113544 kubelet[2687]: E1009 01:02:20.113327 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.113544 kubelet[2687]: W1009 01:02:20.113343 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.113544 kubelet[2687]: E1009 01:02:20.113395 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.114048 kubelet[2687]: E1009 01:02:20.114033 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.114048 kubelet[2687]: W1009 01:02:20.114045 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.114125 kubelet[2687]: E1009 01:02:20.114098 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.114283 kubelet[2687]: E1009 01:02:20.114256 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.114283 kubelet[2687]: W1009 01:02:20.114269 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.114376 kubelet[2687]: E1009 01:02:20.114356 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.114364 systemd[1]: Started cri-containerd-ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56.scope - libcontainer container ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56. 
Oct 9 01:02:20.114856 kubelet[2687]: E1009 01:02:20.114810 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.114856 kubelet[2687]: W1009 01:02:20.114820 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.114856 kubelet[2687]: E1009 01:02:20.114840 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.115142 kubelet[2687]: E1009 01:02:20.115124 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.115142 kubelet[2687]: W1009 01:02:20.115136 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.115253 kubelet[2687]: E1009 01:02:20.115201 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.115396 kubelet[2687]: E1009 01:02:20.115372 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.115396 kubelet[2687]: W1009 01:02:20.115389 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.115481 kubelet[2687]: E1009 01:02:20.115440 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.115639 kubelet[2687]: E1009 01:02:20.115621 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.115639 kubelet[2687]: W1009 01:02:20.115633 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.115736 kubelet[2687]: E1009 01:02:20.115679 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.115946 kubelet[2687]: E1009 01:02:20.115902 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.115946 kubelet[2687]: W1009 01:02:20.115914 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.115946 kubelet[2687]: E1009 01:02:20.115926 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.116591 kubelet[2687]: E1009 01:02:20.116108 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.116591 kubelet[2687]: W1009 01:02:20.116120 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.116591 kubelet[2687]: E1009 01:02:20.116135 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.116591 kubelet[2687]: E1009 01:02:20.116450 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.116591 kubelet[2687]: W1009 01:02:20.116458 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.116591 kubelet[2687]: E1009 01:02:20.116504 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.118406 kubelet[2687]: E1009 01:02:20.118386 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.118406 kubelet[2687]: W1009 01:02:20.118398 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.118497 kubelet[2687]: E1009 01:02:20.118469 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.118710 kubelet[2687]: E1009 01:02:20.118690 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.118710 kubelet[2687]: W1009 01:02:20.118703 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.118710 kubelet[2687]: E1009 01:02:20.118711 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:20.119165 kubelet[2687]: E1009 01:02:20.119145 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.119165 kubelet[2687]: W1009 01:02:20.119158 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.119165 kubelet[2687]: E1009 01:02:20.119168 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.128557 kubelet[2687]: E1009 01:02:20.128526 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:20.128769 kubelet[2687]: W1009 01:02:20.128688 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:20.128769 kubelet[2687]: E1009 01:02:20.128714 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:20.140137 containerd[1499]: time="2024-10-09T01:02:20.140022871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:20.140137 containerd[1499]: time="2024-10-09T01:02:20.140113252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:20.140427 containerd[1499]: time="2024-10-09T01:02:20.140128670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:20.140543 containerd[1499]: time="2024-10-09T01:02:20.140424869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:20.167405 systemd[1]: Started cri-containerd-7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca.scope - libcontainer container 7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca. Oct 9 01:02:20.168284 containerd[1499]: time="2024-10-09T01:02:20.168195663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686c6b6bfd-hp5gj,Uid:afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\"" Oct 9 01:02:20.173367 kubelet[2687]: E1009 01:02:20.173294 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:20.179242 containerd[1499]: time="2024-10-09T01:02:20.179155713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:02:20.195921 containerd[1499]: time="2024-10-09T01:02:20.195876307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9rr8,Uid:dd6b227f-b57f-486d-9e9d-474d464236c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\"" Oct 9 01:02:20.213092 kubelet[2687]: E1009 01:02:20.196761 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:21.472400 kubelet[2687]: E1009 01:02:21.472353 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:23.634749 kubelet[2687]: E1009 01:02:23.634681 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:23.635979 containerd[1499]: time="2024-10-09T01:02:23.635923423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:23.637596 containerd[1499]: time="2024-10-09T01:02:23.637530611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 01:02:23.639025 containerd[1499]: time="2024-10-09T01:02:23.638966557Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:23.641581 containerd[1499]: time="2024-10-09T01:02:23.641532170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:23.642096 containerd[1499]: time="2024-10-09T01:02:23.642049776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.462839501s" Oct 9 01:02:23.642096 containerd[1499]: time="2024-10-09T01:02:23.642081565Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:02:23.644122 containerd[1499]: time="2024-10-09T01:02:23.643926211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:02:23.653308 containerd[1499]: time="2024-10-09T01:02:23.653263402Z" level=info msg="CreateContainer within sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:02:23.670516 containerd[1499]: time="2024-10-09T01:02:23.670471121Z" level=info msg="CreateContainer within sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\"" Oct 9 01:02:23.672224 containerd[1499]: time="2024-10-09T01:02:23.670892395Z" level=info msg="StartContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\"" Oct 9 01:02:23.698352 systemd[1]: Started cri-containerd-125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671.scope - libcontainer container 125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671. 
Oct 9 01:02:23.804650 containerd[1499]: time="2024-10-09T01:02:23.804573691Z" level=info msg="StartContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" returns successfully" Oct 9 01:02:24.637452 containerd[1499]: time="2024-10-09T01:02:24.637402620Z" level=info msg="StopContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" with timeout 300 (s)" Oct 9 01:02:24.637953 containerd[1499]: time="2024-10-09T01:02:24.637787985Z" level=info msg="Stop container \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" with signal terminated" Oct 9 01:02:24.650459 systemd[1]: cri-containerd-125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671.scope: Deactivated successfully. Oct 9 01:02:24.652498 kubelet[2687]: I1009 01:02:24.652419 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-686c6b6bfd-hp5gj" podStartSLOduration=2.183566456 podStartE2EDuration="5.652378626s" podCreationTimestamp="2024-10-09 01:02:19 +0000 UTC" firstStartedPulling="2024-10-09 01:02:20.174993533 +0000 UTC m=+21.778884921" lastFinishedPulling="2024-10-09 01:02:23.643805703 +0000 UTC m=+25.247697091" observedRunningTime="2024-10-09 01:02:24.652098829 +0000 UTC m=+26.255990217" watchObservedRunningTime="2024-10-09 01:02:24.652378626 +0000 UTC m=+26.256270004" Oct 9 01:02:24.673111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671-rootfs.mount: Deactivated successfully. 
Oct 9 01:02:24.677931 containerd[1499]: time="2024-10-09T01:02:24.677858015Z" level=info msg="shim disconnected" id=125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671 namespace=k8s.io Oct 9 01:02:24.677931 containerd[1499]: time="2024-10-09T01:02:24.677920262Z" level=warning msg="cleaning up after shim disconnected" id=125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671 namespace=k8s.io Oct 9 01:02:24.677931 containerd[1499]: time="2024-10-09T01:02:24.677928086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:24.695736 containerd[1499]: time="2024-10-09T01:02:24.695690001Z" level=info msg="StopContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" returns successfully" Oct 9 01:02:24.696445 containerd[1499]: time="2024-10-09T01:02:24.696417020Z" level=info msg="StopPodSandbox for \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\"" Oct 9 01:02:24.696485 containerd[1499]: time="2024-10-09T01:02:24.696456173Z" level=info msg="Container to stop \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:02:24.698584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56-shm.mount: Deactivated successfully. Oct 9 01:02:24.703287 systemd[1]: cri-containerd-ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56.scope: Deactivated successfully. Oct 9 01:02:24.727253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56-rootfs.mount: Deactivated successfully. 
Oct 9 01:02:24.818007 containerd[1499]: time="2024-10-09T01:02:24.817868847Z" level=info msg="shim disconnected" id=ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56 namespace=k8s.io Oct 9 01:02:24.818007 containerd[1499]: time="2024-10-09T01:02:24.817928240Z" level=warning msg="cleaning up after shim disconnected" id=ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56 namespace=k8s.io Oct 9 01:02:24.818007 containerd[1499]: time="2024-10-09T01:02:24.817938379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:24.831951 containerd[1499]: time="2024-10-09T01:02:24.831911316Z" level=info msg="TearDown network for sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" successfully" Oct 9 01:02:24.831951 containerd[1499]: time="2024-10-09T01:02:24.831942235Z" level=info msg="StopPodSandbox for \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" returns successfully" Oct 9 01:02:24.945781 kubelet[2687]: E1009 01:02:24.945662 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.945781 kubelet[2687]: W1009 01:02:24.945690 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.945781 kubelet[2687]: E1009 01:02:24.945710 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:24.945781 kubelet[2687]: I1009 01:02:24.945758 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzxz8\" (UniqueName: \"kubernetes.io/projected/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-kube-api-access-hzxz8\") pod \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " Oct 9 01:02:24.945998 kubelet[2687]: E1009 01:02:24.945970 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.945998 kubelet[2687]: W1009 01:02:24.945984 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.946084 kubelet[2687]: E1009 01:02:24.945999 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:24.946084 kubelet[2687]: I1009 01:02:24.946016 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-typha-certs\") pod \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " Oct 9 01:02:24.946237 kubelet[2687]: E1009 01:02:24.946220 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.946237 kubelet[2687]: W1009 01:02:24.946235 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.946317 kubelet[2687]: E1009 01:02:24.946249 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:24.946317 kubelet[2687]: I1009 01:02:24.946268 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-tigera-ca-bundle\") pod \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\" (UID: \"afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b\") " Oct 9 01:02:24.946551 kubelet[2687]: E1009 01:02:24.946536 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.946607 kubelet[2687]: W1009 01:02:24.946552 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.946607 kubelet[2687]: E1009 01:02:24.946572 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:24.947121 kubelet[2687]: E1009 01:02:24.947092 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.947121 kubelet[2687]: W1009 01:02:24.947108 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.947244 kubelet[2687]: E1009 01:02:24.947123 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:24.950592 kubelet[2687]: I1009 01:02:24.950291 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" (UID: "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:02:24.951374 kubelet[2687]: I1009 01:02:24.951345 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-kube-api-access-hzxz8" (OuterVolumeSpecName: "kube-api-access-hzxz8") pod "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" (UID: "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b"). InnerVolumeSpecName "kube-api-access-hzxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:02:24.951795 systemd[1]: var-lib-kubelet-pods-afe9a1bf\x2dc4ce\x2d41ab\x2d9f3e\x2d1d89eaf20b0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzxz8.mount: Deactivated successfully. Oct 9 01:02:24.951869 kubelet[2687]: E1009 01:02:24.951804 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:24.951869 kubelet[2687]: W1009 01:02:24.951818 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:24.951869 kubelet[2687]: E1009 01:02:24.951834 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:24.951935 systemd[1]: var-lib-kubelet-pods-afe9a1bf\x2dc4ce\x2d41ab\x2d9f3e\x2d1d89eaf20b0b-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Oct 9 01:02:24.952384 kubelet[2687]: I1009 01:02:24.952340 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" (UID: "afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:02:25.046682 kubelet[2687]: I1009 01:02:25.046647 2687 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.046682 kubelet[2687]: I1009 01:02:25.046674 2687 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hzxz8\" (UniqueName: \"kubernetes.io/projected/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-kube-api-access-hzxz8\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.046682 kubelet[2687]: I1009 01:02:25.046686 2687 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b-typha-certs\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.049896 kubelet[2687]: I1009 01:02:25.049860 2687 topology_manager.go:215] "Topology Admit Handler" podUID="1fd8ff70-737a-4551-9b16-f439f13ee72a" podNamespace="calico-system" podName="calico-typha-674d84684d-lvhxd" Oct 9 01:02:25.049941 kubelet[2687]: E1009 01:02:25.049927 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" containerName="calico-typha" Oct 9 01:02:25.049985 kubelet[2687]: I1009 01:02:25.049964 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" containerName="calico-typha" Oct 9 01:02:25.055133 systemd[1]: Created slice kubepods-besteffort-pod1fd8ff70_737a_4551_9b16_f439f13ee72a.slice - libcontainer container kubepods-besteffort-pod1fd8ff70_737a_4551_9b16_f439f13ee72a.slice.
Oct 9 01:02:25.134493 kubelet[2687]: E1009 01:02:25.134438 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.134493 kubelet[2687]: W1009 01:02:25.134465 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.134493 kubelet[2687]: E1009 01:02:25.134488 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.135012 kubelet[2687]: E1009 01:02:25.134996 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.135012 kubelet[2687]: W1009 01:02:25.135010 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.135088 kubelet[2687]: E1009 01:02:25.135021 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.135405 kubelet[2687]: E1009 01:02:25.135370 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.135405 kubelet[2687]: W1009 01:02:25.135383 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.135405 kubelet[2687]: E1009 01:02:25.135395 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.135770 kubelet[2687]: E1009 01:02:25.135749 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.135770 kubelet[2687]: W1009 01:02:25.135762 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.135876 kubelet[2687]: E1009 01:02:25.135772 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.136244 kubelet[2687]: E1009 01:02:25.136225 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.136295 kubelet[2687]: W1009 01:02:25.136244 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.136295 kubelet[2687]: E1009 01:02:25.136265 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.136589 kubelet[2687]: E1009 01:02:25.136575 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.136589 kubelet[2687]: W1009 01:02:25.136589 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.136686 kubelet[2687]: E1009 01:02:25.136599 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.136892 kubelet[2687]: E1009 01:02:25.136879 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.136925 kubelet[2687]: W1009 01:02:25.136891 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.136925 kubelet[2687]: E1009 01:02:25.136902 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.137308 kubelet[2687]: E1009 01:02:25.137295 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.137308 kubelet[2687]: W1009 01:02:25.137307 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.137433 kubelet[2687]: E1009 01:02:25.137318 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.137606 kubelet[2687]: E1009 01:02:25.137592 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.137606 kubelet[2687]: W1009 01:02:25.137605 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.137757 kubelet[2687]: E1009 01:02:25.137616 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.138045 kubelet[2687]: E1009 01:02:25.138027 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.138045 kubelet[2687]: W1009 01:02:25.138042 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.138247 kubelet[2687]: E1009 01:02:25.138053 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.138443 kubelet[2687]: E1009 01:02:25.138312 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.138443 kubelet[2687]: W1009 01:02:25.138325 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.138443 kubelet[2687]: E1009 01:02:25.138337 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.138612 kubelet[2687]: E1009 01:02:25.138600 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.138695 kubelet[2687]: W1009 01:02:25.138676 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.138695 kubelet[2687]: E1009 01:02:25.138693 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.147385 kubelet[2687]: E1009 01:02:25.147355 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.149265 kubelet[2687]: W1009 01:02:25.149231 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.149346 kubelet[2687]: E1009 01:02:25.149271 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.149346 kubelet[2687]: I1009 01:02:25.149299 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1fd8ff70-737a-4551-9b16-f439f13ee72a-typha-certs\") pod \"calico-typha-674d84684d-lvhxd\" (UID: \"1fd8ff70-737a-4551-9b16-f439f13ee72a\") " pod="calico-system/calico-typha-674d84684d-lvhxd" Oct 9 01:02:25.149537 kubelet[2687]: E1009 01:02:25.149524 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.149537 kubelet[2687]: W1009 01:02:25.149536 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.149613 kubelet[2687]: E1009 01:02:25.149548 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.149613 kubelet[2687]: I1009 01:02:25.149570 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd8ff70-737a-4551-9b16-f439f13ee72a-tigera-ca-bundle\") pod \"calico-typha-674d84684d-lvhxd\" (UID: \"1fd8ff70-737a-4551-9b16-f439f13ee72a\") " pod="calico-system/calico-typha-674d84684d-lvhxd" Oct 9 01:02:25.149870 kubelet[2687]: E1009 01:02:25.149855 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.149870 kubelet[2687]: W1009 01:02:25.149867 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.149942 kubelet[2687]: E1009 01:02:25.149879 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.149942 kubelet[2687]: I1009 01:02:25.149893 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kz4\" (UniqueName: \"kubernetes.io/projected/1fd8ff70-737a-4551-9b16-f439f13ee72a-kube-api-access-c4kz4\") pod \"calico-typha-674d84684d-lvhxd\" (UID: \"1fd8ff70-737a-4551-9b16-f439f13ee72a\") " pod="calico-system/calico-typha-674d84684d-lvhxd" Oct 9 01:02:25.150143 kubelet[2687]: E1009 01:02:25.150126 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.150143 kubelet[2687]: W1009 01:02:25.150141 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.150238 kubelet[2687]: E1009 01:02:25.150156 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.150397 kubelet[2687]: E1009 01:02:25.150382 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.150397 kubelet[2687]: W1009 01:02:25.150393 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.150494 kubelet[2687]: E1009 01:02:25.150415 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.150653 kubelet[2687]: E1009 01:02:25.150638 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.150653 kubelet[2687]: W1009 01:02:25.150651 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.150729 kubelet[2687]: E1009 01:02:25.150679 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.150889 kubelet[2687]: E1009 01:02:25.150876 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.150889 kubelet[2687]: W1009 01:02:25.150886 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.150968 kubelet[2687]: E1009 01:02:25.150899 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.151121 kubelet[2687]: E1009 01:02:25.151107 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.151121 kubelet[2687]: W1009 01:02:25.151120 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.151219 kubelet[2687]: E1009 01:02:25.151132 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.151397 kubelet[2687]: E1009 01:02:25.151383 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.151397 kubelet[2687]: W1009 01:02:25.151395 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.151465 kubelet[2687]: E1009 01:02:25.151406 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.161345 containerd[1499]: time="2024-10-09T01:02:25.161299576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:25.162396 containerd[1499]: time="2024-10-09T01:02:25.162360454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:02:25.163522 containerd[1499]: time="2024-10-09T01:02:25.163495110Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:25.166137 containerd[1499]: time="2024-10-09T01:02:25.166069938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:25.166711 containerd[1499]: time="2024-10-09T01:02:25.166684595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.522730001s" Oct 9 01:02:25.166783 containerd[1499]: time="2024-10-09T01:02:25.166713069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:02:25.168413 containerd[1499]: time="2024-10-09T01:02:25.168389095Z" level=info msg="CreateContainer within sandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 01:02:25.191542 containerd[1499]: time="2024-10-09T01:02:25.191495182Z" level=info msg="CreateContainer within sandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\"" Oct 9 01:02:25.191993 containerd[1499]: time="2024-10-09T01:02:25.191969345Z" level=info msg="StartContainer for \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\"" Oct 9 01:02:25.224313 systemd[1]: Started cri-containerd-ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2.scope - libcontainer container ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2. Oct 9 01:02:25.251245 kubelet[2687]: E1009 01:02:25.251168 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.251245 kubelet[2687]: W1009 01:02:25.251237 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.251368 kubelet[2687]: E1009 01:02:25.251260 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.251545 kubelet[2687]: E1009 01:02:25.251514 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.251545 kubelet[2687]: W1009 01:02:25.251541 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.252017 kubelet[2687]: E1009 01:02:25.251558 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.252017 kubelet[2687]: E1009 01:02:25.251830 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.252017 kubelet[2687]: W1009 01:02:25.251841 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.252017 kubelet[2687]: E1009 01:02:25.251859 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.252251 kubelet[2687]: E1009 01:02:25.252234 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.252251 kubelet[2687]: W1009 01:02:25.252250 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.252334 kubelet[2687]: E1009 01:02:25.252275 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.252521 kubelet[2687]: E1009 01:02:25.252502 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.252521 kubelet[2687]: W1009 01:02:25.252513 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.252575 kubelet[2687]: E1009 01:02:25.252527 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.252745 kubelet[2687]: E1009 01:02:25.252733 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.252780 kubelet[2687]: W1009 01:02:25.252744 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.252811 kubelet[2687]: E1009 01:02:25.252787 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.253035 kubelet[2687]: E1009 01:02:25.253023 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.253035 kubelet[2687]: W1009 01:02:25.253034 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.253102 kubelet[2687]: E1009 01:02:25.253061 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.253305 kubelet[2687]: E1009 01:02:25.253292 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.253305 kubelet[2687]: W1009 01:02:25.253307 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.253417 kubelet[2687]: E1009 01:02:25.253401 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.253572 kubelet[2687]: E1009 01:02:25.253558 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.253572 kubelet[2687]: W1009 01:02:25.253571 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.253639 kubelet[2687]: E1009 01:02:25.253585 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.253841 kubelet[2687]: E1009 01:02:25.253827 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.253902 kubelet[2687]: W1009 01:02:25.253840 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.253902 kubelet[2687]: E1009 01:02:25.253852 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.254114 kubelet[2687]: E1009 01:02:25.254094 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.254114 kubelet[2687]: W1009 01:02:25.254108 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.254195 kubelet[2687]: E1009 01:02:25.254145 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.254471 kubelet[2687]: E1009 01:02:25.254458 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.254503 kubelet[2687]: W1009 01:02:25.254471 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.254541 kubelet[2687]: E1009 01:02:25.254506 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.255274 kubelet[2687]: E1009 01:02:25.255151 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.255274 kubelet[2687]: W1009 01:02:25.255165 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.255274 kubelet[2687]: E1009 01:02:25.255208 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.256093 kubelet[2687]: E1009 01:02:25.256008 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.256093 kubelet[2687]: W1009 01:02:25.256023 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.257287 kubelet[2687]: E1009 01:02:25.257271 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.257516 kubelet[2687]: E1009 01:02:25.257503 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.257568 kubelet[2687]: W1009 01:02:25.257515 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.257568 kubelet[2687]: E1009 01:02:25.257526 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.259049 containerd[1499]: time="2024-10-09T01:02:25.258958339Z" level=info msg="StartContainer for \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\" returns successfully" Oct 9 01:02:25.260519 kubelet[2687]: E1009 01:02:25.260502 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.260519 kubelet[2687]: W1009 01:02:25.260517 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.260602 kubelet[2687]: E1009 01:02:25.260540 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.260761 kubelet[2687]: E1009 01:02:25.260741 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.260761 kubelet[2687]: W1009 01:02:25.260757 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.260811 kubelet[2687]: E1009 01:02:25.260767 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:02:25.264640 kubelet[2687]: E1009 01:02:25.264607 2687 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:02:25.264640 kubelet[2687]: W1009 01:02:25.264627 2687 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:02:25.264640 kubelet[2687]: E1009 01:02:25.264645 2687 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:02:25.267485 systemd[1]: cri-containerd-ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2.scope: Deactivated successfully. Oct 9 01:02:25.359666 kubelet[2687]: E1009 01:02:25.359632 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:25.360440 containerd[1499]: time="2024-10-09T01:02:25.360385529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-674d84684d-lvhxd,Uid:1fd8ff70-737a-4551-9b16-f439f13ee72a,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:25.372594 containerd[1499]: time="2024-10-09T01:02:25.372510008Z" level=info msg="shim disconnected" id=ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2 namespace=k8s.io Oct 9 01:02:25.372594 containerd[1499]: time="2024-10-09T01:02:25.372563368Z" level=warning msg="cleaning up after shim disconnected" id=ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2 namespace=k8s.io Oct 9 01:02:25.372594 containerd[1499]: time="2024-10-09T01:02:25.372572575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:25.401514 containerd[1499]: time="2024-10-09T01:02:25.401228252Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:25.401514 containerd[1499]: time="2024-10-09T01:02:25.401274178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:25.401514 containerd[1499]: time="2024-10-09T01:02:25.401288976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:25.401514 containerd[1499]: time="2024-10-09T01:02:25.401383905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:25.426323 systemd[1]: Started cri-containerd-cdbe4482a27a1117906697a157119d46a065264d4c4df6197b857643722f8107.scope - libcontainer container cdbe4482a27a1117906697a157119d46a065264d4c4df6197b857643722f8107. Oct 9 01:02:25.467662 containerd[1499]: time="2024-10-09T01:02:25.467533428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-674d84684d-lvhxd,Uid:1fd8ff70-737a-4551-9b16-f439f13ee72a,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdbe4482a27a1117906697a157119d46a065264d4c4df6197b857643722f8107\"" Oct 9 01:02:25.468458 kubelet[2687]: E1009 01:02:25.468424 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:25.472824 kubelet[2687]: E1009 01:02:25.472755 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:25.476898 containerd[1499]: time="2024-10-09T01:02:25.476783676Z" level=info 
msg="CreateContainer within sandbox \"cdbe4482a27a1117906697a157119d46a065264d4c4df6197b857643722f8107\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:02:25.494707 containerd[1499]: time="2024-10-09T01:02:25.494635940Z" level=info msg="CreateContainer within sandbox \"cdbe4482a27a1117906697a157119d46a065264d4c4df6197b857643722f8107\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e0bd61f414a8764359d4620c6e56366dea211515079959ee8b74f93e2326b89\"" Oct 9 01:02:25.495281 containerd[1499]: time="2024-10-09T01:02:25.495230860Z" level=info msg="StartContainer for \"0e0bd61f414a8764359d4620c6e56366dea211515079959ee8b74f93e2326b89\"" Oct 9 01:02:25.524349 systemd[1]: Started cri-containerd-0e0bd61f414a8764359d4620c6e56366dea211515079959ee8b74f93e2326b89.scope - libcontainer container 0e0bd61f414a8764359d4620c6e56366dea211515079959ee8b74f93e2326b89. Oct 9 01:02:25.566982 containerd[1499]: time="2024-10-09T01:02:25.566936827Z" level=info msg="StartContainer for \"0e0bd61f414a8764359d4620c6e56366dea211515079959ee8b74f93e2326b89\" returns successfully" Oct 9 01:02:25.640393 containerd[1499]: time="2024-10-09T01:02:25.640346671Z" level=info msg="StopPodSandbox for \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\"" Oct 9 01:02:25.640790 containerd[1499]: time="2024-10-09T01:02:25.640392208Z" level=info msg="Container to stop \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:02:25.643437 kubelet[2687]: I1009 01:02:25.642586 2687 scope.go:117] "RemoveContainer" containerID="125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671" Oct 9 01:02:25.644990 containerd[1499]: time="2024-10-09T01:02:25.644964196Z" level=info msg="RemoveContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\"" Oct 9 01:02:25.654709 kubelet[2687]: E1009 01:02:25.654685 2687 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:25.660772 containerd[1499]: time="2024-10-09T01:02:25.657946581Z" level=info msg="RemoveContainer for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" returns successfully" Oct 9 01:02:25.660772 containerd[1499]: time="2024-10-09T01:02:25.660314199Z" level=error msg="ContainerStatus for \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\": not found" Oct 9 01:02:25.659575 systemd[1]: var-lib-kubelet-pods-afe9a1bf\x2dc4ce\x2d41ab\x2d9f3e\x2d1d89eaf20b0b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Oct 9 01:02:25.661059 kubelet[2687]: I1009 01:02:25.659153 2687 scope.go:117] "RemoveContainer" containerID="125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671" Oct 9 01:02:25.661059 kubelet[2687]: E1009 01:02:25.660457 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\": not found" containerID="125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671" Oct 9 01:02:25.661059 kubelet[2687]: I1009 01:02:25.660480 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671"} err="failed to get container status \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\": rpc error: code = NotFound desc = an error occurred when try to find container \"125de365376baa6174b75bb38b1b4c36a335af298bf66d738cd83694b8949671\": not found" Oct 9 01:02:25.659702 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca-shm.mount: Deactivated successfully. Oct 9 01:02:25.661137 systemd[1]: cri-containerd-7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca.scope: Deactivated successfully. Oct 9 01:02:25.668166 systemd[1]: Removed slice kubepods-besteffort-podafe9a1bf_c4ce_41ab_9f3e_1d89eaf20b0b.slice - libcontainer container kubepods-besteffort-podafe9a1bf_c4ce_41ab_9f3e_1d89eaf20b0b.slice. Oct 9 01:02:25.674983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca-rootfs.mount: Deactivated successfully. Oct 9 01:02:25.677274 kubelet[2687]: I1009 01:02:25.677219 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-674d84684d-lvhxd" podStartSLOduration=5.677160349 podStartE2EDuration="5.677160349s" podCreationTimestamp="2024-10-09 01:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:02:25.675359167 +0000 UTC m=+27.279250555" watchObservedRunningTime="2024-10-09 01:02:25.677160349 +0000 UTC m=+27.281051737" Oct 9 01:02:25.687220 containerd[1499]: time="2024-10-09T01:02:25.687087681Z" level=info msg="shim disconnected" id=7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca namespace=k8s.io Oct 9 01:02:25.687220 containerd[1499]: time="2024-10-09T01:02:25.687147213Z" level=warning msg="cleaning up after shim disconnected" id=7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca namespace=k8s.io Oct 9 01:02:25.687220 containerd[1499]: time="2024-10-09T01:02:25.687154917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:25.702022 containerd[1499]: time="2024-10-09T01:02:25.701979651Z" level=info msg="TearDown network for sandbox 
\"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" successfully" Oct 9 01:02:25.702305 containerd[1499]: time="2024-10-09T01:02:25.702154471Z" level=info msg="StopPodSandbox for \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" returns successfully" Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754857 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd6b227f-b57f-486d-9e9d-474d464236c5-node-certs\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754890 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-flexvol-driver-host\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754908 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-run-calico\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754925 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd6b227f-b57f-486d-9e9d-474d464236c5-tigera-ca-bundle\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754946 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz4mv\" (UniqueName: \"kubernetes.io/projected/dd6b227f-b57f-486d-9e9d-474d464236c5-kube-api-access-lz4mv\") pod 
\"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.754980 kubelet[2687]: I1009 01:02:25.754960 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-lib-modules\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755255 kubelet[2687]: I1009 01:02:25.754978 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-net-dir\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755255 kubelet[2687]: I1009 01:02:25.754970 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755255 kubelet[2687]: I1009 01:02:25.754993 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-log-dir\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755255 kubelet[2687]: I1009 01:02:25.755030 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755255 kubelet[2687]: I1009 01:02:25.755055 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-bin-dir\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755377 kubelet[2687]: I1009 01:02:25.755059 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755377 kubelet[2687]: I1009 01:02:25.755074 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-xtables-lock\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755377 kubelet[2687]: I1009 01:02:25.755089 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-policysync\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755377 kubelet[2687]: I1009 01:02:25.755102 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-lib-calico\") pod \"dd6b227f-b57f-486d-9e9d-474d464236c5\" (UID: \"dd6b227f-b57f-486d-9e9d-474d464236c5\") " Oct 9 01:02:25.755377 kubelet[2687]: I1009 01:02:25.755228 2687 operation_generator.go:887] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-policysync" (OuterVolumeSpecName: "policysync") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755493 kubelet[2687]: I1009 01:02:25.755249 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755493 kubelet[2687]: I1009 01:02:25.755269 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755493 kubelet[2687]: I1009 01:02:25.755313 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755578 kubelet[2687]: I1009 01:02:25.755546 2687 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.755578 kubelet[2687]: I1009 01:02:25.755558 2687 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.755578 kubelet[2687]: I1009 01:02:25.755568 2687 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-run-calico\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.755650 kubelet[2687]: I1009 01:02:25.755584 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b227f-b57f-486d-9e9d-474d464236c5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:02:25.755650 kubelet[2687]: I1009 01:02:25.755612 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.755834 kubelet[2687]: I1009 01:02:25.755815 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:02:25.758392 kubelet[2687]: I1009 01:02:25.758321 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6b227f-b57f-486d-9e9d-474d464236c5-node-certs" (OuterVolumeSpecName: "node-certs") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:02:25.758797 kubelet[2687]: I1009 01:02:25.758762 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6b227f-b57f-486d-9e9d-474d464236c5-kube-api-access-lz4mv" (OuterVolumeSpecName: "kube-api-access-lz4mv") pod "dd6b227f-b57f-486d-9e9d-474d464236c5" (UID: "dd6b227f-b57f-486d-9e9d-474d464236c5"). InnerVolumeSpecName "kube-api-access-lz4mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:02:25.759684 systemd[1]: var-lib-kubelet-pods-dd6b227f\x2db57f\x2d486d\x2d9e9d\x2d474d464236c5-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 9 01:02:25.761874 systemd[1]: var-lib-kubelet-pods-dd6b227f\x2db57f\x2d486d\x2d9e9d\x2d474d464236c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlz4mv.mount: Deactivated successfully. 
Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856676 2687 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856714 2687 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856723 2687 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856732 2687 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-policysync\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856741 2687 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856750 2687 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd6b227f-b57f-486d-9e9d-474d464236c5-node-certs\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.856740 kubelet[2687]: I1009 01:02:25.856759 2687 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd6b227f-b57f-486d-9e9d-474d464236c5-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.857041 kubelet[2687]: I1009 01:02:25.856768 2687 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/dd6b227f-b57f-486d-9e9d-474d464236c5-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:25.857041 kubelet[2687]: I1009 01:02:25.856777 2687 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lz4mv\" (UniqueName: \"kubernetes.io/projected/dd6b227f-b57f-486d-9e9d-474d464236c5-kube-api-access-lz4mv\") on node \"localhost\" DevicePath \"\"" Oct 9 01:02:26.092041 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:57936.service - OpenSSH per-connection server daemon (10.0.0.1:57936). Oct 9 01:02:26.129322 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 57936 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:26.130925 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:26.134886 systemd-logind[1477]: New session 10 of user core. Oct 9 01:02:26.142448 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:02:26.258414 sshd[3583]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:26.263062 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:57936.service: Deactivated successfully. Oct 9 01:02:26.265274 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:02:26.265891 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:02:26.266782 systemd-logind[1477]: Removed session 10. Oct 9 01:02:26.475271 kubelet[2687]: I1009 01:02:26.475120 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b" path="/var/lib/kubelet/pods/afe9a1bf-c4ce-41ab-9f3e-1d89eaf20b0b/volumes" Oct 9 01:02:26.480616 systemd[1]: Removed slice kubepods-besteffort-poddd6b227f_b57f_486d_9e9d_474d464236c5.slice - libcontainer container kubepods-besteffort-poddd6b227f_b57f_486d_9e9d_474d464236c5.slice. 
Oct 9 01:02:26.650645 kubelet[2687]: I1009 01:02:26.650617 2687 scope.go:117] "RemoveContainer" containerID="ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2" Oct 9 01:02:26.652331 containerd[1499]: time="2024-10-09T01:02:26.652297438Z" level=info msg="RemoveContainer for \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\"" Oct 9 01:02:26.656440 containerd[1499]: time="2024-10-09T01:02:26.656416823Z" level=info msg="RemoveContainer for \"ddb958e1e9707d45415a6a6781b1dec9387883c8c6d6001d6bc06426ae4feac2\" returns successfully" Oct 9 01:02:26.678616 kubelet[2687]: I1009 01:02:26.678502 2687 topology_manager.go:215] "Topology Admit Handler" podUID="f6cab2e2-2bd0-45f7-9699-25693475f5c6" podNamespace="calico-system" podName="calico-node-6mjsz" Oct 9 01:02:26.678616 kubelet[2687]: E1009 01:02:26.678590 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd6b227f-b57f-486d-9e9d-474d464236c5" containerName="flexvol-driver" Oct 9 01:02:26.679569 kubelet[2687]: I1009 01:02:26.679137 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6b227f-b57f-486d-9e9d-474d464236c5" containerName="flexvol-driver" Oct 9 01:02:26.690788 systemd[1]: Created slice kubepods-besteffort-podf6cab2e2_2bd0_45f7_9699_25693475f5c6.slice - libcontainer container kubepods-besteffort-podf6cab2e2_2bd0_45f7_9699_25693475f5c6.slice. 
Oct 9 01:02:26.762686 kubelet[2687]: I1009 01:02:26.762550 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-policysync\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762686 kubelet[2687]: I1009 01:02:26.762600 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-var-run-calico\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762686 kubelet[2687]: I1009 01:02:26.762616 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-lib-modules\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762686 kubelet[2687]: I1009 01:02:26.762634 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfkx\" (UniqueName: \"kubernetes.io/projected/f6cab2e2-2bd0-45f7-9699-25693475f5c6-kube-api-access-lnfkx\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762686 kubelet[2687]: I1009 01:02:26.762652 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-cni-net-dir\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762933 kubelet[2687]: I1009 01:02:26.762666 2687 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-cni-log-dir\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762933 kubelet[2687]: I1009 01:02:26.762678 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-flexvol-driver-host\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762933 kubelet[2687]: I1009 01:02:26.762692 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-xtables-lock\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762933 kubelet[2687]: I1009 01:02:26.762705 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6cab2e2-2bd0-45f7-9699-25693475f5c6-tigera-ca-bundle\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.762933 kubelet[2687]: I1009 01:02:26.762718 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-var-lib-calico\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.763082 kubelet[2687]: I1009 01:02:26.762737 2687 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f6cab2e2-2bd0-45f7-9699-25693475f5c6-node-certs\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.763082 kubelet[2687]: I1009 01:02:26.762750 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f6cab2e2-2bd0-45f7-9699-25693475f5c6-cni-bin-dir\") pod \"calico-node-6mjsz\" (UID: \"f6cab2e2-2bd0-45f7-9699-25693475f5c6\") " pod="calico-system/calico-node-6mjsz" Oct 9 01:02:26.994641 kubelet[2687]: E1009 01:02:26.994598 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:26.995327 containerd[1499]: time="2024-10-09T01:02:26.994965736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mjsz,Uid:f6cab2e2-2bd0-45f7-9699-25693475f5c6,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:27.015830 containerd[1499]: time="2024-10-09T01:02:27.015685903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:27.015830 containerd[1499]: time="2024-10-09T01:02:27.015741738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:27.015830 containerd[1499]: time="2024-10-09T01:02:27.015755324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:27.015981 containerd[1499]: time="2024-10-09T01:02:27.015839381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:27.035372 systemd[1]: Started cri-containerd-a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0.scope - libcontainer container a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0. Oct 9 01:02:27.056971 containerd[1499]: time="2024-10-09T01:02:27.056918211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mjsz,Uid:f6cab2e2-2bd0-45f7-9699-25693475f5c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\"" Oct 9 01:02:27.057646 kubelet[2687]: E1009 01:02:27.057619 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:27.059565 containerd[1499]: time="2024-10-09T01:02:27.059541678Z" level=info msg="CreateContainer within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:02:27.076080 containerd[1499]: time="2024-10-09T01:02:27.076042552Z" level=info msg="CreateContainer within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181\"" Oct 9 01:02:27.076466 containerd[1499]: time="2024-10-09T01:02:27.076446322Z" level=info msg="StartContainer for \"eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181\"" Oct 9 01:02:27.104330 systemd[1]: Started cri-containerd-eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181.scope - libcontainer container eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181. 
Oct 9 01:02:27.141776 containerd[1499]: time="2024-10-09T01:02:27.141713872Z" level=info msg="StartContainer for \"eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181\" returns successfully" Oct 9 01:02:27.148671 systemd[1]: cri-containerd-eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181.scope: Deactivated successfully. Oct 9 01:02:27.177491 containerd[1499]: time="2024-10-09T01:02:27.177419940Z" level=info msg="shim disconnected" id=eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181 namespace=k8s.io Oct 9 01:02:27.177491 containerd[1499]: time="2024-10-09T01:02:27.177487466Z" level=warning msg="cleaning up after shim disconnected" id=eadb9c7874950d86e87c1963b2c081926db5b994f11c6c7116c793e71283c181 namespace=k8s.io Oct 9 01:02:27.177697 containerd[1499]: time="2024-10-09T01:02:27.177497776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:27.472287 kubelet[2687]: E1009 01:02:27.472252 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:27.655448 kubelet[2687]: E1009 01:02:27.655420 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:27.656087 containerd[1499]: time="2024-10-09T01:02:27.655847250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:02:28.474317 kubelet[2687]: I1009 01:02:28.474279 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6b227f-b57f-486d-9e9d-474d464236c5" path="/var/lib/kubelet/pods/dd6b227f-b57f-486d-9e9d-474d464236c5/volumes" Oct 9 01:02:29.471831 kubelet[2687]: E1009 01:02:29.471785 2687 pod_workers.go:1298] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:31.276007 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:46236.service - OpenSSH per-connection server daemon (10.0.0.1:46236). Oct 9 01:02:31.322268 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 46236 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:31.325344 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:31.330549 systemd-logind[1477]: New session 11 of user core. Oct 9 01:02:31.334567 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:02:31.462014 sshd[3714]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:31.466745 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:46236.service: Deactivated successfully. Oct 9 01:02:31.469355 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:02:31.471096 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:02:31.472517 systemd-logind[1477]: Removed session 11. 
Oct 9 01:02:31.472915 kubelet[2687]: E1009 01:02:31.472812 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:32.218867 containerd[1499]: time="2024-10-09T01:02:32.218825970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:32.219657 containerd[1499]: time="2024-10-09T01:02:32.219623029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:02:32.220794 containerd[1499]: time="2024-10-09T01:02:32.220771218Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:32.222886 containerd[1499]: time="2024-10-09T01:02:32.222860756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:32.223498 containerd[1499]: time="2024-10-09T01:02:32.223475953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.567592265s" Oct 9 01:02:32.223529 containerd[1499]: time="2024-10-09T01:02:32.223498185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" 
Oct 9 01:02:32.225226 containerd[1499]: time="2024-10-09T01:02:32.225198481Z" level=info msg="CreateContainer within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:02:32.240920 containerd[1499]: time="2024-10-09T01:02:32.240878219Z" level=info msg="CreateContainer within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe\"" Oct 9 01:02:32.241423 containerd[1499]: time="2024-10-09T01:02:32.241401443Z" level=info msg="StartContainer for \"d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe\"" Oct 9 01:02:32.274312 systemd[1]: Started cri-containerd-d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe.scope - libcontainer container d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe. Oct 9 01:02:32.445453 containerd[1499]: time="2024-10-09T01:02:32.445399508Z" level=info msg="StartContainer for \"d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe\" returns successfully" Oct 9 01:02:32.664682 kubelet[2687]: E1009 01:02:32.664640 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:33.465643 containerd[1499]: time="2024-10-09T01:02:33.465589167Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:02:33.468898 systemd[1]: cri-containerd-d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe.scope: Deactivated successfully. 
Oct 9 01:02:33.473754 kubelet[2687]: E1009 01:02:33.472397 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:33.490204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe-rootfs.mount: Deactivated successfully. Oct 9 01:02:33.544271 kubelet[2687]: I1009 01:02:33.544227 2687 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:02:33.589792 containerd[1499]: time="2024-10-09T01:02:33.589723846Z" level=info msg="shim disconnected" id=d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe namespace=k8s.io Oct 9 01:02:33.589792 containerd[1499]: time="2024-10-09T01:02:33.589782587Z" level=warning msg="cleaning up after shim disconnected" id=d89375184fa05d031fb5dc97d69d8470e8c1632891b0b0ad6b9ce7847cd90fbe namespace=k8s.io Oct 9 01:02:33.589792 containerd[1499]: time="2024-10-09T01:02:33.589790983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:02:33.601387 kubelet[2687]: I1009 01:02:33.600445 2687 topology_manager.go:215] "Topology Admit Handler" podUID="726ab05e-673f-4982-87fa-4374d1b69c72" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bq8dt" Oct 9 01:02:33.604363 kubelet[2687]: I1009 01:02:33.604328 2687 topology_manager.go:215] "Topology Admit Handler" podUID="4fe3c131-24db-436d-a409-b7a4a9a99add" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gx47x" Oct 9 01:02:33.605397 kubelet[2687]: I1009 01:02:33.605129 2687 topology_manager.go:215] "Topology Admit Handler" podUID="7a657548-7e20-4720-aab6-7e3bb6241198" podNamespace="calico-system" podName="calico-kube-controllers-795955cfdf-jtjng" Oct 9 01:02:33.613462 kubelet[2687]: I1009 
01:02:33.613432 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/726ab05e-673f-4982-87fa-4374d1b69c72-config-volume\") pod \"coredns-7db6d8ff4d-bq8dt\" (UID: \"726ab05e-673f-4982-87fa-4374d1b69c72\") " pod="kube-system/coredns-7db6d8ff4d-bq8dt" Oct 9 01:02:33.613551 kubelet[2687]: I1009 01:02:33.613472 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlls\" (UniqueName: \"kubernetes.io/projected/726ab05e-673f-4982-87fa-4374d1b69c72-kube-api-access-smlls\") pod \"coredns-7db6d8ff4d-bq8dt\" (UID: \"726ab05e-673f-4982-87fa-4374d1b69c72\") " pod="kube-system/coredns-7db6d8ff4d-bq8dt" Oct 9 01:02:33.614174 systemd[1]: Created slice kubepods-burstable-pod726ab05e_673f_4982_87fa_4374d1b69c72.slice - libcontainer container kubepods-burstable-pod726ab05e_673f_4982_87fa_4374d1b69c72.slice. Oct 9 01:02:33.619809 systemd[1]: Created slice kubepods-burstable-pod4fe3c131_24db_436d_a409_b7a4a9a99add.slice - libcontainer container kubepods-burstable-pod4fe3c131_24db_436d_a409_b7a4a9a99add.slice. Oct 9 01:02:33.626054 systemd[1]: Created slice kubepods-besteffort-pod7a657548_7e20_4720_aab6_7e3bb6241198.slice - libcontainer container kubepods-besteffort-pod7a657548_7e20_4720_aab6_7e3bb6241198.slice. 
Oct 9 01:02:33.667103 kubelet[2687]: E1009 01:02:33.667073 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:33.667693 containerd[1499]: time="2024-10-09T01:02:33.667659453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:02:33.713937 kubelet[2687]: I1009 01:02:33.713899 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fe3c131-24db-436d-a409-b7a4a9a99add-config-volume\") pod \"coredns-7db6d8ff4d-gx47x\" (UID: \"4fe3c131-24db-436d-a409-b7a4a9a99add\") " pod="kube-system/coredns-7db6d8ff4d-gx47x" Oct 9 01:02:33.714587 kubelet[2687]: I1009 01:02:33.713980 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a657548-7e20-4720-aab6-7e3bb6241198-tigera-ca-bundle\") pod \"calico-kube-controllers-795955cfdf-jtjng\" (UID: \"7a657548-7e20-4720-aab6-7e3bb6241198\") " pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" Oct 9 01:02:33.714587 kubelet[2687]: I1009 01:02:33.714020 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdtn\" (UniqueName: \"kubernetes.io/projected/7a657548-7e20-4720-aab6-7e3bb6241198-kube-api-access-jbdtn\") pod \"calico-kube-controllers-795955cfdf-jtjng\" (UID: \"7a657548-7e20-4720-aab6-7e3bb6241198\") " pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" Oct 9 01:02:33.714587 kubelet[2687]: I1009 01:02:33.714059 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mz97\" (UniqueName: \"kubernetes.io/projected/4fe3c131-24db-436d-a409-b7a4a9a99add-kube-api-access-8mz97\") pod \"coredns-7db6d8ff4d-gx47x\" (UID: 
\"4fe3c131-24db-436d-a409-b7a4a9a99add\") " pod="kube-system/coredns-7db6d8ff4d-gx47x" Oct 9 01:02:33.917684 kubelet[2687]: E1009 01:02:33.917643 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:33.918242 containerd[1499]: time="2024-10-09T01:02:33.918177218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq8dt,Uid:726ab05e-673f-4982-87fa-4374d1b69c72,Namespace:kube-system,Attempt:0,}" Oct 9 01:02:33.923880 kubelet[2687]: E1009 01:02:33.923849 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:33.924359 containerd[1499]: time="2024-10-09T01:02:33.924310036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gx47x,Uid:4fe3c131-24db-436d-a409-b7a4a9a99add,Namespace:kube-system,Attempt:0,}" Oct 9 01:02:33.932794 containerd[1499]: time="2024-10-09T01:02:33.932764117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795955cfdf-jtjng,Uid:7a657548-7e20-4720-aab6-7e3bb6241198,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:34.286028 containerd[1499]: time="2024-10-09T01:02:34.285875285Z" level=error msg="Failed to destroy network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.287105 containerd[1499]: time="2024-10-09T01:02:34.286938082Z" level=error msg="encountered an error cleaning up failed sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.287105 containerd[1499]: time="2024-10-09T01:02:34.287010298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq8dt,Uid:726ab05e-673f-4982-87fa-4374d1b69c72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.287324 kubelet[2687]: E1009 01:02:34.287290 2687 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.287391 kubelet[2687]: E1009 01:02:34.287352 2687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bq8dt" Oct 9 01:02:34.287391 kubelet[2687]: E1009 01:02:34.287371 2687 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bq8dt" Oct 9 01:02:34.287477 kubelet[2687]: E1009 01:02:34.287418 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bq8dt_kube-system(726ab05e-673f-4982-87fa-4374d1b69c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bq8dt_kube-system(726ab05e-673f-4982-87fa-4374d1b69c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bq8dt" podUID="726ab05e-673f-4982-87fa-4374d1b69c72" Oct 9 01:02:34.289520 containerd[1499]: time="2024-10-09T01:02:34.289469790Z" level=error msg="Failed to destroy network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.289828 containerd[1499]: time="2024-10-09T01:02:34.289792516Z" level=error msg="encountered an error cleaning up failed sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.289873 containerd[1499]: time="2024-10-09T01:02:34.289834275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795955cfdf-jtjng,Uid:7a657548-7e20-4720-aab6-7e3bb6241198,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.290036 kubelet[2687]: E1009 01:02:34.289996 2687 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.290087 kubelet[2687]: E1009 01:02:34.290046 2687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" Oct 9 01:02:34.290087 kubelet[2687]: E1009 01:02:34.290063 2687 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" Oct 9 01:02:34.290139 kubelet[2687]: E1009 01:02:34.290095 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-795955cfdf-jtjng_calico-system(7a657548-7e20-4720-aab6-7e3bb6241198)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-795955cfdf-jtjng_calico-system(7a657548-7e20-4720-aab6-7e3bb6241198)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" podUID="7a657548-7e20-4720-aab6-7e3bb6241198" Oct 9 01:02:34.290518 containerd[1499]: time="2024-10-09T01:02:34.290231401Z" level=error msg="Failed to destroy network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.290925 containerd[1499]: time="2024-10-09T01:02:34.290898104Z" level=error msg="encountered an error cleaning up failed sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.290999 containerd[1499]: time="2024-10-09T01:02:34.290934783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gx47x,Uid:4fe3c131-24db-436d-a409-b7a4a9a99add,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.291664 kubelet[2687]: E1009 01:02:34.291612 2687 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.291714 kubelet[2687]: E1009 01:02:34.291671 2687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gx47x" Oct 9 01:02:34.291714 kubelet[2687]: E1009 01:02:34.291692 2687 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gx47x" Oct 9 01:02:34.291766 kubelet[2687]: E1009 01:02:34.291730 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gx47x_kube-system(4fe3c131-24db-436d-a409-b7a4a9a99add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gx47x_kube-system(4fe3c131-24db-436d-a409-b7a4a9a99add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gx47x" podUID="4fe3c131-24db-436d-a409-b7a4a9a99add" Oct 9 01:02:34.491349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d-shm.mount: Deactivated successfully. Oct 9 01:02:34.669096 kubelet[2687]: I1009 01:02:34.669068 2687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:34.669636 containerd[1499]: time="2024-10-09T01:02:34.669524656Z" level=info msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" Oct 9 01:02:34.670006 containerd[1499]: time="2024-10-09T01:02:34.669724061Z" level=info msg="Ensure that sandbox 6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c in task-service has been cleanup successfully" Oct 9 01:02:34.671121 kubelet[2687]: I1009 01:02:34.670404 2687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:34.671211 containerd[1499]: time="2024-10-09T01:02:34.670801666Z" level=info msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" Oct 9 01:02:34.671310 containerd[1499]: time="2024-10-09T01:02:34.671282600Z" level=info msg="Ensure that sandbox 9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c in task-service has been cleanup successfully" Oct 9 01:02:34.671786 kubelet[2687]: I1009 01:02:34.671760 2687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 
01:02:34.673022 containerd[1499]: time="2024-10-09T01:02:34.672991021Z" level=info msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" Oct 9 01:02:34.673219 containerd[1499]: time="2024-10-09T01:02:34.673179004Z" level=info msg="Ensure that sandbox d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d in task-service has been cleanup successfully" Oct 9 01:02:34.696871 containerd[1499]: time="2024-10-09T01:02:34.696790700Z" level=error msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" failed" error="failed to destroy network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.697081 kubelet[2687]: E1009 01:02:34.697047 2687 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:34.697227 kubelet[2687]: E1009 01:02:34.697096 2687 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c"} Oct 9 01:02:34.697227 kubelet[2687]: E1009 01:02:34.697128 2687 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a657548-7e20-4720-aab6-7e3bb6241198\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:02:34.697227 kubelet[2687]: E1009 01:02:34.697153 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a657548-7e20-4720-aab6-7e3bb6241198\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" podUID="7a657548-7e20-4720-aab6-7e3bb6241198" Oct 9 01:02:34.699604 containerd[1499]: time="2024-10-09T01:02:34.699564804Z" level=error msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" failed" error="failed to destroy network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.699738 kubelet[2687]: E1009 01:02:34.699714 2687 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:34.699799 kubelet[2687]: E1009 
01:02:34.699742 2687 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d"} Oct 9 01:02:34.699799 kubelet[2687]: E1009 01:02:34.699770 2687 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"726ab05e-673f-4982-87fa-4374d1b69c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:02:34.699799 kubelet[2687]: E1009 01:02:34.699787 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"726ab05e-673f-4982-87fa-4374d1b69c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bq8dt" podUID="726ab05e-673f-4982-87fa-4374d1b69c72" Oct 9 01:02:34.701595 containerd[1499]: time="2024-10-09T01:02:34.701571495Z" level=error msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" failed" error="failed to destroy network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:34.701698 kubelet[2687]: E1009 01:02:34.701674 2687 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:34.701745 kubelet[2687]: E1009 01:02:34.701701 2687 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c"} Oct 9 01:02:34.701745 kubelet[2687]: E1009 01:02:34.701723 2687 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fe3c131-24db-436d-a409-b7a4a9a99add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:02:34.701745 kubelet[2687]: E1009 01:02:34.701740 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fe3c131-24db-436d-a409-b7a4a9a99add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gx47x" podUID="4fe3c131-24db-436d-a409-b7a4a9a99add" Oct 9 01:02:35.477420 systemd[1]: Created slice kubepods-besteffort-poda4e2fd99_fdbc_431a_b2d4_cd9ba338c55a.slice - libcontainer container 
kubepods-besteffort-poda4e2fd99_fdbc_431a_b2d4_cd9ba338c55a.slice. Oct 9 01:02:35.479485 containerd[1499]: time="2024-10-09T01:02:35.479431516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xz24,Uid:a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a,Namespace:calico-system,Attempt:0,}" Oct 9 01:02:36.176158 containerd[1499]: time="2024-10-09T01:02:36.176112268Z" level=error msg="Failed to destroy network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:36.176669 containerd[1499]: time="2024-10-09T01:02:36.176612098Z" level=error msg="encountered an error cleaning up failed sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:36.176669 containerd[1499]: time="2024-10-09T01:02:36.176653205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xz24,Uid:a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:36.176885 kubelet[2687]: E1009 01:02:36.176838 2687 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:36.177194 kubelet[2687]: E1009 01:02:36.176896 2687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:36.177194 kubelet[2687]: E1009 01:02:36.176916 2687 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xz24" Oct 9 01:02:36.177194 kubelet[2687]: E1009 01:02:36.176961 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4xz24_calico-system(a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4xz24_calico-system(a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:36.178608 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512-shm.mount: Deactivated successfully. Oct 9 01:02:36.481975 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:46240.service - OpenSSH per-connection server daemon (10.0.0.1:46240). Oct 9 01:02:36.679680 kubelet[2687]: I1009 01:02:36.679566 2687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:36.680348 containerd[1499]: time="2024-10-09T01:02:36.680293280Z" level=info msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" Oct 9 01:02:36.681827 containerd[1499]: time="2024-10-09T01:02:36.680629152Z" level=info msg="Ensure that sandbox 439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512 in task-service has been cleanup successfully" Oct 9 01:02:36.836663 containerd[1499]: time="2024-10-09T01:02:36.836557665Z" level=error msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" failed" error="failed to destroy network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:02:36.838072 kubelet[2687]: E1009 01:02:36.837857 2687 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:36.838072 kubelet[2687]: E1009 
01:02:36.837929 2687 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512"} Oct 9 01:02:36.838072 kubelet[2687]: E1009 01:02:36.837977 2687 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:02:36.838072 kubelet[2687]: E1009 01:02:36.838016 2687 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xz24" podUID="a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a" Oct 9 01:02:36.861983 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:36.860869 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:36.872059 systemd-logind[1477]: New session 12 of user core. Oct 9 01:02:36.875451 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:02:37.073550 sshd[4022]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:37.089009 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:46240.service: Deactivated successfully. 
Oct 9 01:02:37.093740 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:02:37.096444 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:02:37.104608 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:49524.service - OpenSSH per-connection server daemon (10.0.0.1:49524). Oct 9 01:02:37.106789 systemd-logind[1477]: Removed session 12. Oct 9 01:02:37.147537 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 49524 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:37.152945 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:37.164885 systemd-logind[1477]: New session 13 of user core. Oct 9 01:02:37.183994 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:02:37.497477 sshd[4062]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:37.510926 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:49524.service: Deactivated successfully. Oct 9 01:02:37.512222 kubelet[2687]: I1009 01:02:37.511862 2687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:02:37.513451 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:02:37.514811 kubelet[2687]: E1009 01:02:37.513867 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:37.516086 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:02:37.535018 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:49540.service - OpenSSH per-connection server daemon (10.0.0.1:49540). Oct 9 01:02:37.539611 systemd-logind[1477]: Removed session 13. 
Oct 9 01:02:37.608940 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 49540 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:37.611728 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:37.631003 systemd-logind[1477]: New session 14 of user core. Oct 9 01:02:37.640619 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:02:37.683292 kubelet[2687]: E1009 01:02:37.682864 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:37.856092 sshd[4075]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:37.859408 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:02:37.859921 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:49540.service: Deactivated successfully. Oct 9 01:02:37.861901 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:02:37.864998 systemd-logind[1477]: Removed session 14. Oct 9 01:02:38.714826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451899485.mount: Deactivated successfully. 
Oct 9 01:02:39.529996 containerd[1499]: time="2024-10-09T01:02:39.529946756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:39.530769 containerd[1499]: time="2024-10-09T01:02:39.530731380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:02:39.532011 containerd[1499]: time="2024-10-09T01:02:39.531981668Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:39.534029 containerd[1499]: time="2024-10-09T01:02:39.533999327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:39.534719 containerd[1499]: time="2024-10-09T01:02:39.534660619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.866960309s" Oct 9 01:02:39.534719 containerd[1499]: time="2024-10-09T01:02:39.534712728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:02:39.543907 containerd[1499]: time="2024-10-09T01:02:39.543864787Z" level=info msg="CreateContainer within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:02:39.571280 containerd[1499]: time="2024-10-09T01:02:39.571245874Z" level=info msg="CreateContainer 
within sandbox \"a7180e7fe6ca0c530b1aafdbbd5891db324a94c0c0665076ad80e98e93a502e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a930da201d59421023bc2234104c737f5ebb281e5511ad97250b23eb00a595a0\"" Oct 9 01:02:39.571680 containerd[1499]: time="2024-10-09T01:02:39.571648059Z" level=info msg="StartContainer for \"a930da201d59421023bc2234104c737f5ebb281e5511ad97250b23eb00a595a0\"" Oct 9 01:02:39.670148 systemd[1]: Started cri-containerd-a930da201d59421023bc2234104c737f5ebb281e5511ad97250b23eb00a595a0.scope - libcontainer container a930da201d59421023bc2234104c737f5ebb281e5511ad97250b23eb00a595a0. Oct 9 01:02:40.000604 containerd[1499]: time="2024-10-09T01:02:40.000551779Z" level=info msg="StartContainer for \"a930da201d59421023bc2234104c737f5ebb281e5511ad97250b23eb00a595a0\" returns successfully" Oct 9 01:02:40.061966 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:02:40.062105 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Oct 9 01:02:40.692676 kubelet[2687]: E1009 01:02:40.692641 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:40.798765 kubelet[2687]: I1009 01:02:40.798687 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6mjsz" podStartSLOduration=2.9190227159999997 podStartE2EDuration="14.798668228s" podCreationTimestamp="2024-10-09 01:02:26 +0000 UTC" firstStartedPulling="2024-10-09 01:02:27.655681297 +0000 UTC m=+29.259572686" lastFinishedPulling="2024-10-09 01:02:39.53532681 +0000 UTC m=+41.139218198" observedRunningTime="2024-10-09 01:02:40.798636829 +0000 UTC m=+42.402528217" watchObservedRunningTime="2024-10-09 01:02:40.798668228 +0000 UTC m=+42.402559616" Oct 9 01:02:41.693998 kubelet[2687]: E1009 01:02:41.693965 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:42.145264 kernel: bpftool[4328]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:02:42.385373 systemd-networkd[1414]: vxlan.calico: Link UP Oct 9 01:02:42.385383 systemd-networkd[1414]: vxlan.calico: Gained carrier Oct 9 01:02:42.871347 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:49556.service - OpenSSH per-connection server daemon (10.0.0.1:49556). Oct 9 01:02:42.911359 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 49556 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:42.913020 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:42.917453 systemd-logind[1477]: New session 15 of user core. Oct 9 01:02:42.928453 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 01:02:43.040549 sshd[4401]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:43.044033 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:49556.service: Deactivated successfully. Oct 9 01:02:43.045846 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:02:43.046442 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:02:43.047211 systemd-logind[1477]: Removed session 15. Oct 9 01:02:43.974366 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL Oct 9 01:02:46.473100 containerd[1499]: time="2024-10-09T01:02:46.473053308Z" level=info msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.532 [INFO][4440] k8s.go 608: Cleaning up netns ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.533 [INFO][4440] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" iface="eth0" netns="/var/run/netns/cni-91bea0da-8f3a-78fa-f7dc-2867295c2e76" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.533 [INFO][4440] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" iface="eth0" netns="/var/run/netns/cni-91bea0da-8f3a-78fa-f7dc-2867295c2e76" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.533 [INFO][4440] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" iface="eth0" netns="/var/run/netns/cni-91bea0da-8f3a-78fa-f7dc-2867295c2e76" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.533 [INFO][4440] k8s.go 615: Releasing IP address(es) ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.533 [INFO][4440] utils.go 188: Calico CNI releasing IP address ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.580 [INFO][4448] ipam_plugin.go 417: Releasing address using handleID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.581 [INFO][4448] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.581 [INFO][4448] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.587 [WARNING][4448] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.587 [INFO][4448] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.588 [INFO][4448] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:46.593391 containerd[1499]: 2024-10-09 01:02:46.590 [INFO][4440] k8s.go 621: Teardown processing complete. ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:46.594062 containerd[1499]: time="2024-10-09T01:02:46.593555777Z" level=info msg="TearDown network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" successfully" Oct 9 01:02:46.594062 containerd[1499]: time="2024-10-09T01:02:46.593593447Z" level=info msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" returns successfully" Oct 9 01:02:46.594453 containerd[1499]: time="2024-10-09T01:02:46.594408577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795955cfdf-jtjng,Uid:7a657548-7e20-4720-aab6-7e3bb6241198,Namespace:calico-system,Attempt:1,}" Oct 9 01:02:46.596858 systemd[1]: run-netns-cni\x2d91bea0da\x2d8f3a\x2d78fa\x2df7dc\x2d2867295c2e76.mount: Deactivated successfully. 
Oct 9 01:02:46.733242 systemd-networkd[1414]: calid15269de60c: Link UP Oct 9 01:02:46.734453 systemd-networkd[1414]: calid15269de60c: Gained carrier Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.674 [INFO][4455] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0 calico-kube-controllers-795955cfdf- calico-system 7a657548-7e20-4720-aab6-7e3bb6241198 943 0 2024-10-09 01:02:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:795955cfdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-795955cfdf-jtjng eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid15269de60c [] []}} ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.674 [INFO][4455] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.698 [INFO][4469] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" HandleID="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.706 
[INFO][4469] ipam_plugin.go 270: Auto assigning IP ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" HandleID="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-795955cfdf-jtjng", "timestamp":"2024-10-09 01:02:46.698166215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.706 [INFO][4469] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.706 [INFO][4469] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.706 [INFO][4469] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.708 [INFO][4469] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.712 [INFO][4469] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.715 [INFO][4469] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.717 [INFO][4469] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.718 [INFO][4469] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.719 [INFO][4469] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.720 [INFO][4469] ipam.go 1685: Creating new handle: k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780 Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.723 [INFO][4469] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.727 [INFO][4469] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" host="localhost" Oct 9 
01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.727 [INFO][4469] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" host="localhost" Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.728 [INFO][4469] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:46.751866 containerd[1499]: 2024-10-09 01:02:46.728 [INFO][4469] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" HandleID="k8s-pod-network.71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.730 [INFO][4455] k8s.go 386: Populated endpoint ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0", GenerateName:"calico-kube-controllers-795955cfdf-", Namespace:"calico-system", SelfLink:"", UID:"7a657548-7e20-4720-aab6-7e3bb6241198", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795955cfdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-795955cfdf-jtjng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid15269de60c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.730 [INFO][4455] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.730 [INFO][4455] dataplane_linux.go 68: Setting the host side veth name to calid15269de60c ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.733 [INFO][4455] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.734 [INFO][4455] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" 
Pod="calico-kube-controllers-795955cfdf-jtjng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0", GenerateName:"calico-kube-controllers-795955cfdf-", Namespace:"calico-system", SelfLink:"", UID:"7a657548-7e20-4720-aab6-7e3bb6241198", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795955cfdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780", Pod:"calico-kube-controllers-795955cfdf-jtjng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid15269de60c", MAC:"5e:04:84:03:2c:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:46.752595 containerd[1499]: 2024-10-09 01:02:46.743 [INFO][4455] k8s.go 500: Wrote updated endpoint to datastore ContainerID="71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780" Namespace="calico-system" Pod="calico-kube-controllers-795955cfdf-jtjng" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:46.802409 containerd[1499]: time="2024-10-09T01:02:46.802271785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:46.803265 containerd[1499]: time="2024-10-09T01:02:46.803199055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:46.803265 containerd[1499]: time="2024-10-09T01:02:46.803239842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:46.803409 containerd[1499]: time="2024-10-09T01:02:46.803376799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:46.831403 systemd[1]: Started cri-containerd-71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780.scope - libcontainer container 71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780. Oct 9 01:02:46.843565 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:02:46.867385 containerd[1499]: time="2024-10-09T01:02:46.867345461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795955cfdf-jtjng,Uid:7a657548-7e20-4720-aab6-7e3bb6241198,Namespace:calico-system,Attempt:1,} returns sandbox id \"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780\"" Oct 9 01:02:46.870115 containerd[1499]: time="2024-10-09T01:02:46.869163503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:02:48.056709 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:49388.service - OpenSSH per-connection server daemon (10.0.0.1:49388). 
Oct 9 01:02:48.096879 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 49388 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:48.098640 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:48.103206 systemd-logind[1477]: New session 16 of user core. Oct 9 01:02:48.110348 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:02:48.227287 sshd[4540]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:48.230960 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:49388.service: Deactivated successfully. Oct 9 01:02:48.233116 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:02:48.233799 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:02:48.234671 systemd-logind[1477]: Removed session 16. Oct 9 01:02:48.473244 containerd[1499]: time="2024-10-09T01:02:48.473179092Z" level=info msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" Oct 9 01:02:48.710353 systemd-networkd[1414]: calid15269de60c: Gained IPv6LL Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.937 [INFO][4570] k8s.go 608: Cleaning up netns ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.938 [INFO][4570] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" iface="eth0" netns="/var/run/netns/cni-f7a92209-e3d2-03be-d754-be0b0979ae42" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.938 [INFO][4570] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" iface="eth0" netns="/var/run/netns/cni-f7a92209-e3d2-03be-d754-be0b0979ae42" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.938 [INFO][4570] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" iface="eth0" netns="/var/run/netns/cni-f7a92209-e3d2-03be-d754-be0b0979ae42" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.938 [INFO][4570] k8s.go 615: Releasing IP address(es) ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.939 [INFO][4570] utils.go 188: Calico CNI releasing IP address ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.962 [INFO][4578] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.963 [INFO][4578] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.963 [INFO][4578] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.969 [WARNING][4578] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.969 [INFO][4578] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.971 [INFO][4578] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:48.977066 containerd[1499]: 2024-10-09 01:02:48.973 [INFO][4570] k8s.go 621: Teardown processing complete. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:48.977731 containerd[1499]: time="2024-10-09T01:02:48.977306376Z" level=info msg="TearDown network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" successfully" Oct 9 01:02:48.977731 containerd[1499]: time="2024-10-09T01:02:48.977333587Z" level=info msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" returns successfully" Oct 9 01:02:48.977798 kubelet[2687]: E1009 01:02:48.977675 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:48.978499 containerd[1499]: time="2024-10-09T01:02:48.978444241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq8dt,Uid:726ab05e-673f-4982-87fa-4374d1b69c72,Namespace:kube-system,Attempt:1,}" Oct 9 01:02:48.981673 systemd[1]: run-netns-cni\x2df7a92209\x2de3d2\x2d03be\x2dd754\x2dbe0b0979ae42.mount: Deactivated successfully. 
Oct 9 01:02:49.417808 systemd-networkd[1414]: cali693eb795f41: Link UP Oct 9 01:02:49.418026 systemd-networkd[1414]: cali693eb795f41: Gained carrier Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.345 [INFO][4589] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0 coredns-7db6d8ff4d- kube-system 726ab05e-673f-4982-87fa-4374d1b69c72 955 0 2024-10-09 01:02:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bq8dt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali693eb795f41 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.345 [INFO][4589] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.373 [INFO][4604] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" HandleID="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.382 [INFO][4604] ipam_plugin.go 270: Auto assigning IP ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" 
HandleID="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a0530), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bq8dt", "timestamp":"2024-10-09 01:02:49.373674485 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.382 [INFO][4604] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.382 [INFO][4604] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.382 [INFO][4604] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.384 [INFO][4604] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.388 [INFO][4604] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.393 [INFO][4604] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.394 [INFO][4604] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.399 [INFO][4604] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.399 [INFO][4604] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.401 [INFO][4604] ipam.go 1685: Creating new handle: k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86 Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.407 [INFO][4604] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.412 [INFO][4604] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.412 [INFO][4604] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" host="localhost" Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.412 [INFO][4604] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:02:49.438082 containerd[1499]: 2024-10-09 01:02:49.412 [INFO][4604] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" HandleID="k8s-pod-network.386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.415 [INFO][4589] k8s.go 386: Populated endpoint ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"726ab05e-673f-4982-87fa-4374d1b69c72", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bq8dt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali693eb795f41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.415 [INFO][4589] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.415 [INFO][4589] dataplane_linux.go 68: Setting the host side veth name to cali693eb795f41 ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.417 [INFO][4589] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.418 [INFO][4589] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"726ab05e-673f-4982-87fa-4374d1b69c72", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86", Pod:"coredns-7db6d8ff4d-bq8dt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali693eb795f41", MAC:"fa:20:3c:df:be:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:49.438664 containerd[1499]: 2024-10-09 01:02:49.429 [INFO][4589] k8s.go 500: Wrote updated endpoint to datastore ContainerID="386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq8dt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:49.461126 containerd[1499]: 
time="2024-10-09T01:02:49.460915889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:49.461126 containerd[1499]: time="2024-10-09T01:02:49.460992793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:49.461467 containerd[1499]: time="2024-10-09T01:02:49.461023601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:49.461467 containerd[1499]: time="2024-10-09T01:02:49.461151031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:49.497525 systemd[1]: Started cri-containerd-386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86.scope - libcontainer container 386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86. 
Oct 9 01:02:49.532482 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:02:49.593074 containerd[1499]: time="2024-10-09T01:02:49.592993818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq8dt,Uid:726ab05e-673f-4982-87fa-4374d1b69c72,Namespace:kube-system,Attempt:1,} returns sandbox id \"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86\"" Oct 9 01:02:49.605778 kubelet[2687]: E1009 01:02:49.605722 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:49.609808 containerd[1499]: time="2024-10-09T01:02:49.609748130Z" level=info msg="CreateContainer within sandbox \"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:02:50.031270 containerd[1499]: time="2024-10-09T01:02:50.031210057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:50.031518 containerd[1499]: time="2024-10-09T01:02:50.031481395Z" level=info msg="CreateContainer within sandbox \"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c63a40e6625be48e64a8d7aed44225e4a01f0611e964a9a9c4a13788a9507f4\"" Oct 9 01:02:50.032313 containerd[1499]: time="2024-10-09T01:02:50.032272781Z" level=info msg="StartContainer for \"0c63a40e6625be48e64a8d7aed44225e4a01f0611e964a9a9c4a13788a9507f4\"" Oct 9 01:02:50.032672 containerd[1499]: time="2024-10-09T01:02:50.032640431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:02:50.034236 containerd[1499]: time="2024-10-09T01:02:50.034160384Z" level=info 
msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:50.036935 containerd[1499]: time="2024-10-09T01:02:50.036891629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:50.037628 containerd[1499]: time="2024-10-09T01:02:50.037600699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.168314525s" Oct 9 01:02:50.037710 containerd[1499]: time="2024-10-09T01:02:50.037633591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:02:50.056430 containerd[1499]: time="2024-10-09T01:02:50.056390219Z" level=info msg="CreateContainer within sandbox \"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:02:50.070321 systemd[1]: Started cri-containerd-0c63a40e6625be48e64a8d7aed44225e4a01f0611e964a9a9c4a13788a9507f4.scope - libcontainer container 0c63a40e6625be48e64a8d7aed44225e4a01f0611e964a9a9c4a13788a9507f4. 
Oct 9 01:02:50.088063 containerd[1499]: time="2024-10-09T01:02:50.088028611Z" level=info msg="CreateContainer within sandbox \"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4b0f6bc20b9a1adacc091a899b680664af6299a01333823715a62dbb00c8ad4a\"" Oct 9 01:02:50.088729 containerd[1499]: time="2024-10-09T01:02:50.088701294Z" level=info msg="StartContainer for \"4b0f6bc20b9a1adacc091a899b680664af6299a01333823715a62dbb00c8ad4a\"" Oct 9 01:02:50.103396 containerd[1499]: time="2024-10-09T01:02:50.103265444Z" level=info msg="StartContainer for \"0c63a40e6625be48e64a8d7aed44225e4a01f0611e964a9a9c4a13788a9507f4\" returns successfully" Oct 9 01:02:50.122379 systemd[1]: Started cri-containerd-4b0f6bc20b9a1adacc091a899b680664af6299a01333823715a62dbb00c8ad4a.scope - libcontainer container 4b0f6bc20b9a1adacc091a899b680664af6299a01333823715a62dbb00c8ad4a. Oct 9 01:02:50.209066 containerd[1499]: time="2024-10-09T01:02:50.209015582Z" level=info msg="StartContainer for \"4b0f6bc20b9a1adacc091a899b680664af6299a01333823715a62dbb00c8ad4a\" returns successfully" Oct 9 01:02:50.474806 containerd[1499]: time="2024-10-09T01:02:50.473717726Z" level=info msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.528 [INFO][4762] k8s.go 608: Cleaning up netns ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.529 [INFO][4762] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" iface="eth0" netns="/var/run/netns/cni-2f22b8d6-09d3-268b-a94b-c6d245d5cc50" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.529 [INFO][4762] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" iface="eth0" netns="/var/run/netns/cni-2f22b8d6-09d3-268b-a94b-c6d245d5cc50" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.529 [INFO][4762] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" iface="eth0" netns="/var/run/netns/cni-2f22b8d6-09d3-268b-a94b-c6d245d5cc50" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.529 [INFO][4762] k8s.go 615: Releasing IP address(es) ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.529 [INFO][4762] utils.go 188: Calico CNI releasing IP address ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.551 [INFO][4770] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.551 [INFO][4770] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.551 [INFO][4770] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.555 [WARNING][4770] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.555 [INFO][4770] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.557 [INFO][4770] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:50.562075 containerd[1499]: 2024-10-09 01:02:50.559 [INFO][4762] k8s.go 621: Teardown processing complete. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:50.562600 containerd[1499]: time="2024-10-09T01:02:50.562299867Z" level=info msg="TearDown network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" successfully" Oct 9 01:02:50.562600 containerd[1499]: time="2024-10-09T01:02:50.562327850Z" level=info msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" returns successfully" Oct 9 01:02:50.563211 kubelet[2687]: E1009 01:02:50.563168 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:50.566238 containerd[1499]: time="2024-10-09T01:02:50.563896634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gx47x,Uid:4fe3c131-24db-436d-a409-b7a4a9a99add,Namespace:kube-system,Attempt:1,}" Oct 9 01:02:50.735977 systemd-networkd[1414]: cali612f478715b: Link UP Oct 9 01:02:50.737058 systemd-networkd[1414]: cali612f478715b: Gained carrier Oct 9 01:02:50.742668 kubelet[2687]: E1009 
01:02:50.742638 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.653 [INFO][4778] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0 coredns-7db6d8ff4d- kube-system 4fe3c131-24db-436d-a409-b7a4a9a99add 985 0 2024-10-09 01:02:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-gx47x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali612f478715b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.653 [INFO][4778] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.692 [INFO][4792] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" HandleID="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.699 [INFO][4792] ipam_plugin.go 270: Auto assigning IP ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" 
HandleID="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd260), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-gx47x", "timestamp":"2024-10-09 01:02:50.692526548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.699 [INFO][4792] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.700 [INFO][4792] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.700 [INFO][4792] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.702 [INFO][4792] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.706 [INFO][4792] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.710 [INFO][4792] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.711 [INFO][4792] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.713 [INFO][4792] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.713 [INFO][4792] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.716 [INFO][4792] ipam.go 1685: Creating new handle: k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.721 [INFO][4792] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.729 [INFO][4792] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.729 [INFO][4792] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" host="localhost" Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.729 [INFO][4792] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:02:50.757576 containerd[1499]: 2024-10-09 01:02:50.729 [INFO][4792] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" HandleID="k8s-pod-network.2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.732 [INFO][4778] k8s.go 386: Populated endpoint ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4fe3c131-24db-436d-a409-b7a4a9a99add", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-gx47x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali612f478715b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.732 [INFO][4778] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.732 [INFO][4778] dataplane_linux.go 68: Setting the host side veth name to cali612f478715b ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.738 [INFO][4778] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.739 [INFO][4778] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4fe3c131-24db-436d-a409-b7a4a9a99add", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed", Pod:"coredns-7db6d8ff4d-gx47x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali612f478715b", MAC:"96:e4:69:e1:39:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:50.758433 containerd[1499]: 2024-10-09 01:02:50.749 [INFO][4778] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gx47x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:50.758347 systemd-networkd[1414]: 
cali693eb795f41: Gained IPv6LL Oct 9 01:02:50.774165 kubelet[2687]: I1009 01:02:50.774104 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bq8dt" podStartSLOduration=37.774087584 podStartE2EDuration="37.774087584s" podCreationTimestamp="2024-10-09 01:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:02:50.773512895 +0000 UTC m=+52.377404273" watchObservedRunningTime="2024-10-09 01:02:50.774087584 +0000 UTC m=+52.377978962" Oct 9 01:02:50.819706 containerd[1499]: time="2024-10-09T01:02:50.819555277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:50.819706 containerd[1499]: time="2024-10-09T01:02:50.819633574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:50.819706 containerd[1499]: time="2024-10-09T01:02:50.819646949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:50.819877 containerd[1499]: time="2024-10-09T01:02:50.819755212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:50.838354 systemd[1]: Started cri-containerd-2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed.scope - libcontainer container 2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed. 
Oct 9 01:02:50.850972 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:02:50.873444 containerd[1499]: time="2024-10-09T01:02:50.873381406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gx47x,Uid:4fe3c131-24db-436d-a409-b7a4a9a99add,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed\"" Oct 9 01:02:50.874224 kubelet[2687]: E1009 01:02:50.874202 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:50.876206 containerd[1499]: time="2024-10-09T01:02:50.876162074Z" level=info msg="CreateContainer within sandbox \"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:02:50.973703 kubelet[2687]: I1009 01:02:50.973263 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-795955cfdf-jtjng" podStartSLOduration=27.803230321 podStartE2EDuration="30.973249275s" podCreationTimestamp="2024-10-09 01:02:20 +0000 UTC" firstStartedPulling="2024-10-09 01:02:46.86874038 +0000 UTC m=+48.472631768" lastFinishedPulling="2024-10-09 01:02:50.038759334 +0000 UTC m=+51.642650722" observedRunningTime="2024-10-09 01:02:50.972738357 +0000 UTC m=+52.576629745" watchObservedRunningTime="2024-10-09 01:02:50.973249275 +0000 UTC m=+52.577140663" Oct 9 01:02:50.982821 containerd[1499]: time="2024-10-09T01:02:50.982759522Z" level=info msg="CreateContainer within sandbox \"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d\"" Oct 9 01:02:50.984282 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1386477472.mount: Deactivated successfully. Oct 9 01:02:50.985558 containerd[1499]: time="2024-10-09T01:02:50.985134709Z" level=info msg="StartContainer for \"30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d\"" Oct 9 01:02:50.985750 systemd[1]: run-netns-cni\x2d2f22b8d6\x2d09d3\x2d268b\x2da94b\x2dc6d245d5cc50.mount: Deactivated successfully. Oct 9 01:02:51.026151 systemd[1]: run-containerd-runc-k8s.io-30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d-runc.gMh8Ty.mount: Deactivated successfully. Oct 9 01:02:51.037671 systemd[1]: Started cri-containerd-30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d.scope - libcontainer container 30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d. Oct 9 01:02:51.067500 containerd[1499]: time="2024-10-09T01:02:51.067452709Z" level=info msg="StartContainer for \"30961b31945d52916ed6a05ee7685bbcb3e9b24b0ddd0404943ff5648426f98d\" returns successfully" Oct 9 01:02:51.473461 containerd[1499]: time="2024-10-09T01:02:51.473394620Z" level=info msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] k8s.go 608: Cleaning up netns ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" iface="eth0" netns="/var/run/netns/cni-bf750cd5-47a2-5c6e-610d-33c4c01fbe00" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" iface="eth0" netns="/var/run/netns/cni-bf750cd5-47a2-5c6e-610d-33c4c01fbe00" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" iface="eth0" netns="/var/run/netns/cni-bf750cd5-47a2-5c6e-610d-33c4c01fbe00" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] k8s.go 615: Releasing IP address(es) ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.516 [INFO][4936] utils.go 188: Calico CNI releasing IP address ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.535 [INFO][4944] ipam_plugin.go 417: Releasing address using handleID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.535 [INFO][4944] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.535 [INFO][4944] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.540 [WARNING][4944] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.540 [INFO][4944] ipam_plugin.go 445: Releasing address using workloadID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.541 [INFO][4944] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:51.546139 containerd[1499]: 2024-10-09 01:02:51.543 [INFO][4936] k8s.go 621: Teardown processing complete. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:51.546799 containerd[1499]: time="2024-10-09T01:02:51.546762514Z" level=info msg="TearDown network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" successfully" Oct 9 01:02:51.546799 containerd[1499]: time="2024-10-09T01:02:51.546790657Z" level=info msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" returns successfully" Oct 9 01:02:51.547575 containerd[1499]: time="2024-10-09T01:02:51.547550733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xz24,Uid:a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a,Namespace:calico-system,Attempt:1,}" Oct 9 01:02:51.550052 systemd[1]: run-netns-cni\x2dbf750cd5\x2d47a2\x2d5c6e\x2d610d\x2d33c4c01fbe00.mount: Deactivated successfully. 
Oct 9 01:02:51.748226 kubelet[2687]: E1009 01:02:51.747937 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:51.749294 kubelet[2687]: E1009 01:02:51.748283 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:51.797422 systemd-networkd[1414]: calid3d5bb939c0: Link UP Oct 9 01:02:51.797704 systemd-networkd[1414]: calid3d5bb939c0: Gained carrier Oct 9 01:02:51.872208 kubelet[2687]: I1009 01:02:51.872126 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gx47x" podStartSLOduration=38.872111584 podStartE2EDuration="38.872111584s" podCreationTimestamp="2024-10-09 01:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:02:51.870835059 +0000 UTC m=+53.474726447" watchObservedRunningTime="2024-10-09 01:02:51.872111584 +0000 UTC m=+53.476002972" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.594 [INFO][4952] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4xz24-eth0 csi-node-driver- calico-system a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a 1014 0 2024-10-09 01:02:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-4xz24 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid3d5bb939c0 [] []}} ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" 
Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.594 [INFO][4952] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.620 [INFO][4964] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" HandleID="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.629 [INFO][4964] ipam_plugin.go 270: Auto assigning IP ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" HandleID="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000683c40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4xz24", "timestamp":"2024-10-09 01:02:51.620983909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.629 [INFO][4964] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.630 [INFO][4964] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.630 [INFO][4964] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.632 [INFO][4964] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.635 [INFO][4964] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.639 [INFO][4964] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.640 [INFO][4964] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.651 [INFO][4964] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.651 [INFO][4964] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.652 [INFO][4964] ipam.go 1685: Creating new handle: k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08 Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.671 [INFO][4964] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.789 [INFO][4964] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" host="localhost" Oct 9 
01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.789 [INFO][4964] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" host="localhost" Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.789 [INFO][4964] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:51.876348 containerd[1499]: 2024-10-09 01:02:51.789 [INFO][4964] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" HandleID="k8s-pod-network.bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.794 [INFO][4952] k8s.go 386: Populated endpoint ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4xz24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4xz24", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3d5bb939c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.794 [INFO][4952] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.794 [INFO][4952] dataplane_linux.go 68: Setting the host side veth name to calid3d5bb939c0 ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.797 [INFO][4952] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.797 [INFO][4952] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4xz24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08", Pod:"csi-node-driver-4xz24", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3d5bb939c0", MAC:"5a:62:49:81:67:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:51.877551 containerd[1499]: 2024-10-09 01:02:51.871 [INFO][4952] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08" Namespace="calico-system" Pod="csi-node-driver-4xz24" WorkloadEndpoint="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:51.952864 containerd[1499]: time="2024-10-09T01:02:51.952536820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:02:51.952864 containerd[1499]: time="2024-10-09T01:02:51.952600840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:02:51.952864 containerd[1499]: time="2024-10-09T01:02:51.952617662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:51.952864 containerd[1499]: time="2024-10-09T01:02:51.952717900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:02:51.974332 systemd[1]: Started cri-containerd-bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08.scope - libcontainer container bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08. Oct 9 01:02:51.990372 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:02:52.000842 containerd[1499]: time="2024-10-09T01:02:52.000733540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xz24,Uid:a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08\"" Oct 9 01:02:52.002469 containerd[1499]: time="2024-10-09T01:02:52.002446104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:02:52.679570 systemd-networkd[1414]: cali612f478715b: Gained IPv6LL Oct 9 01:02:52.750830 kubelet[2687]: E1009 01:02:52.750507 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:52.750830 kubelet[2687]: E1009 01:02:52.750647 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:52.998319 systemd-networkd[1414]: calid3d5bb939c0: Gained IPv6LL Oct 9 01:02:53.245421 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:49394.service - OpenSSH per-connection server daemon (10.0.0.1:49394). Oct 9 01:02:53.339772 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 49394 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:53.345719 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:53.351237 systemd-logind[1477]: New session 17 of user core. Oct 9 01:02:53.357831 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:02:53.446567 containerd[1499]: time="2024-10-09T01:02:53.446508152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:53.447516 containerd[1499]: time="2024-10-09T01:02:53.447450110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 01:02:53.448594 containerd[1499]: time="2024-10-09T01:02:53.448550554Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:53.452213 containerd[1499]: time="2024-10-09T01:02:53.450747977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:53.452213 containerd[1499]: time="2024-10-09T01:02:53.451337994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.448863256s" Oct 9 01:02:53.452213 containerd[1499]: time="2024-10-09T01:02:53.451362691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 01:02:53.454076 containerd[1499]: time="2024-10-09T01:02:53.454041096Z" level=info msg="CreateContainer within sandbox \"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:02:53.507857 containerd[1499]: time="2024-10-09T01:02:53.507804171Z" level=info msg="CreateContainer within sandbox \"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a414506ca9c4363e86be44a991bde119988b676a707d4e08be1e67a5e51d9b7\"" Oct 9 01:02:53.508808 containerd[1499]: time="2024-10-09T01:02:53.508778891Z" level=info msg="StartContainer for \"0a414506ca9c4363e86be44a991bde119988b676a707d4e08be1e67a5e51d9b7\"" Oct 9 01:02:53.529062 sshd[5045]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:53.534865 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:49394.service: Deactivated successfully. Oct 9 01:02:53.537586 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:02:53.538562 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:02:53.539893 systemd-logind[1477]: Removed session 17. Oct 9 01:02:53.555332 systemd[1]: Started cri-containerd-0a414506ca9c4363e86be44a991bde119988b676a707d4e08be1e67a5e51d9b7.scope - libcontainer container 0a414506ca9c4363e86be44a991bde119988b676a707d4e08be1e67a5e51d9b7. 
Oct 9 01:02:53.675718 containerd[1499]: time="2024-10-09T01:02:53.675611468Z" level=info msg="StartContainer for \"0a414506ca9c4363e86be44a991bde119988b676a707d4e08be1e67a5e51d9b7\" returns successfully" Oct 9 01:02:53.676946 containerd[1499]: time="2024-10-09T01:02:53.676645278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:02:53.753495 kubelet[2687]: E1009 01:02:53.753446 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:02:56.153057 containerd[1499]: time="2024-10-09T01:02:56.153004696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:56.154117 containerd[1499]: time="2024-10-09T01:02:56.154072318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:02:56.167300 containerd[1499]: time="2024-10-09T01:02:56.167249289Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:56.170308 containerd[1499]: time="2024-10-09T01:02:56.170267411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:02:56.170876 containerd[1499]: time="2024-10-09T01:02:56.170842691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.494167677s" Oct 9 01:02:56.170934 containerd[1499]: time="2024-10-09T01:02:56.170875893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:02:56.172899 containerd[1499]: time="2024-10-09T01:02:56.172872919Z" level=info msg="CreateContainer within sandbox \"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:02:56.190281 containerd[1499]: time="2024-10-09T01:02:56.190226396Z" level=info msg="CreateContainer within sandbox \"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f0e9bca9bc6b95fb276d91ee265c0a2c08085d2f1b8599216fa6413cbeb3ad6\"" Oct 9 01:02:56.191056 containerd[1499]: time="2024-10-09T01:02:56.191000669Z" level=info msg="StartContainer for \"9f0e9bca9bc6b95fb276d91ee265c0a2c08085d2f1b8599216fa6413cbeb3ad6\"" Oct 9 01:02:56.220372 systemd[1]: Started cri-containerd-9f0e9bca9bc6b95fb276d91ee265c0a2c08085d2f1b8599216fa6413cbeb3ad6.scope - libcontainer container 9f0e9bca9bc6b95fb276d91ee265c0a2c08085d2f1b8599216fa6413cbeb3ad6. 
Oct 9 01:02:56.326312 containerd[1499]: time="2024-10-09T01:02:56.326267409Z" level=info msg="StartContainer for \"9f0e9bca9bc6b95fb276d91ee265c0a2c08085d2f1b8599216fa6413cbeb3ad6\" returns successfully" Oct 9 01:02:56.546356 kubelet[2687]: I1009 01:02:56.546232 2687 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:02:56.546356 kubelet[2687]: I1009 01:02:56.546277 2687 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:02:56.830915 kubelet[2687]: I1009 01:02:56.830759 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4xz24" podStartSLOduration=33.661285552 podStartE2EDuration="37.830744633s" podCreationTimestamp="2024-10-09 01:02:19 +0000 UTC" firstStartedPulling="2024-10-09 01:02:52.002256137 +0000 UTC m=+53.606147525" lastFinishedPulling="2024-10-09 01:02:56.171715218 +0000 UTC m=+57.775606606" observedRunningTime="2024-10-09 01:02:56.830091658 +0000 UTC m=+58.433983046" watchObservedRunningTime="2024-10-09 01:02:56.830744633 +0000 UTC m=+58.434636022" Oct 9 01:02:58.465855 containerd[1499]: time="2024-10-09T01:02:58.465813852Z" level=info msg="StopPodSandbox for \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\"" Oct 9 01:02:58.466267 containerd[1499]: time="2024-10-09T01:02:58.465914601Z" level=info msg="TearDown network for sandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" successfully" Oct 9 01:02:58.466267 containerd[1499]: time="2024-10-09T01:02:58.465924700Z" level=info msg="StopPodSandbox for \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" returns successfully" Oct 9 01:02:58.470863 containerd[1499]: time="2024-10-09T01:02:58.470782993Z" level=info msg="RemovePodSandbox for 
\"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\"" Oct 9 01:02:58.483527 containerd[1499]: time="2024-10-09T01:02:58.483481925Z" level=info msg="Forcibly stopping sandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\"" Oct 9 01:02:58.483645 containerd[1499]: time="2024-10-09T01:02:58.483588946Z" level=info msg="TearDown network for sandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" successfully" Oct 9 01:02:58.497770 containerd[1499]: time="2024-10-09T01:02:58.497712892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:02:58.497821 containerd[1499]: time="2024-10-09T01:02:58.497805115Z" level=info msg="RemovePodSandbox \"7e9368a18f0cc2b5c3a3edcd72dc6c2b85f99d9857fb71daef21fd177fc393ca\" returns successfully" Oct 9 01:02:58.499716 containerd[1499]: time="2024-10-09T01:02:58.499365362Z" level=info msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" Oct 9 01:02:58.549552 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.533 [WARNING][5160] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4fe3c131-24db-436d-a409-b7a4a9a99add", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed", Pod:"coredns-7db6d8ff4d-gx47x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali612f478715b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.534 [INFO][5160] k8s.go 608: Cleaning up netns 
ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.534 [INFO][5160] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" iface="eth0" netns="" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.534 [INFO][5160] k8s.go 615: Releasing IP address(es) ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.534 [INFO][5160] utils.go 188: Calico CNI releasing IP address ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.558 [INFO][5169] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.558 [INFO][5169] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.558 [INFO][5169] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.563 [WARNING][5169] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.563 [INFO][5169] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.564 [INFO][5169] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:58.570127 containerd[1499]: 2024-10-09 01:02:58.567 [INFO][5160] k8s.go 621: Teardown processing complete. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.570127 containerd[1499]: time="2024-10-09T01:02:58.569975663Z" level=info msg="TearDown network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" successfully" Oct 9 01:02:58.570127 containerd[1499]: time="2024-10-09T01:02:58.570002694Z" level=info msg="StopPodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" returns successfully" Oct 9 01:02:58.571161 containerd[1499]: time="2024-10-09T01:02:58.571126842Z" level=info msg="RemovePodSandbox for \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" Oct 9 01:02:58.571249 containerd[1499]: time="2024-10-09T01:02:58.571168611Z" level=info msg="Forcibly stopping sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\"" Oct 9 01:02:58.585570 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:02:58.587461 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:58.592264 
systemd-logind[1477]: New session 18 of user core. Oct 9 01:02:58.599437 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.612 [WARNING][5194] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4fe3c131-24db-436d-a409-b7a4a9a99add", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fd73477ee700fb48f6452de234c74e01f829df3052f1554f04815455b5364ed", Pod:"coredns-7db6d8ff4d-gx47x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali612f478715b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.612 [INFO][5194] k8s.go 608: Cleaning up netns ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.612 [INFO][5194] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" iface="eth0" netns="" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.612 [INFO][5194] k8s.go 615: Releasing IP address(es) ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.612 [INFO][5194] utils.go 188: Calico CNI releasing IP address ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.635 [INFO][5203] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.635 [INFO][5203] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.635 [INFO][5203] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.640 [WARNING][5203] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.640 [INFO][5203] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" HandleID="k8s-pod-network.9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Workload="localhost-k8s-coredns--7db6d8ff4d--gx47x-eth0" Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.642 [INFO][5203] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:58.649289 containerd[1499]: 2024-10-09 01:02:58.646 [INFO][5194] k8s.go 621: Teardown processing complete. ContainerID="9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c" Oct 9 01:02:58.649703 containerd[1499]: time="2024-10-09T01:02:58.649352543Z" level=info msg="TearDown network for sandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" successfully" Oct 9 01:02:58.653378 containerd[1499]: time="2024-10-09T01:02:58.653343389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:02:58.653451 containerd[1499]: time="2024-10-09T01:02:58.653407439Z" level=info msg="RemovePodSandbox \"9a5396141cd785d6d6c98c40dbfd76292d49365988b0064324fc9f293f14814c\" returns successfully" Oct 9 01:02:58.654159 containerd[1499]: time="2024-10-09T01:02:58.653914235Z" level=info msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.693 [WARNING][5236] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"726ab05e-673f-4982-87fa-4374d1b69c72", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86", Pod:"coredns-7db6d8ff4d-bq8dt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali693eb795f41", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.694 [INFO][5236] k8s.go 608: Cleaning up netns ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.694 [INFO][5236] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" iface="eth0" netns="" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.694 [INFO][5236] k8s.go 615: Releasing IP address(es) ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.694 [INFO][5236] utils.go 188: Calico CNI releasing IP address ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.715 [INFO][5246] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.715 [INFO][5246] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.715 [INFO][5246] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.721 [WARNING][5246] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.721 [INFO][5246] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.722 [INFO][5246] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:58.727712 containerd[1499]: 2024-10-09 01:02:58.725 [INFO][5236] k8s.go 621: Teardown processing complete. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.727712 containerd[1499]: time="2024-10-09T01:02:58.727649263Z" level=info msg="TearDown network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" successfully" Oct 9 01:02:58.727712 containerd[1499]: time="2024-10-09T01:02:58.727684138Z" level=info msg="StopPodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" returns successfully" Oct 9 01:02:58.728354 containerd[1499]: time="2024-10-09T01:02:58.728225077Z" level=info msg="RemovePodSandbox for \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" Oct 9 01:02:58.728354 containerd[1499]: time="2024-10-09T01:02:58.728256997Z" level=info msg="Forcibly stopping sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\"" Oct 9 01:02:58.728693 sshd[5174]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:58.733398 systemd[1]: 
sshd@17-10.0.0.81:22-10.0.0.1:54376.service: Deactivated successfully. Oct 9 01:02:58.736392 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:02:58.737265 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:02:58.738434 systemd-logind[1477]: Removed session 18. Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.764 [WARNING][5270] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"726ab05e-673f-4982-87fa-4374d1b69c72", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"386e35aea44d145e792a019192ad6b284fbff570536c68b97aded7882592ba86", Pod:"coredns-7db6d8ff4d-bq8dt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali693eb795f41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.764 [INFO][5270] k8s.go 608: Cleaning up netns ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.764 [INFO][5270] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" iface="eth0" netns="" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.764 [INFO][5270] k8s.go 615: Releasing IP address(es) ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.764 [INFO][5270] utils.go 188: Calico CNI releasing IP address ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.784 [INFO][5278] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.784 [INFO][5278] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.784 [INFO][5278] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.789 [WARNING][5278] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.789 [INFO][5278] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" HandleID="k8s-pod-network.d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Workload="localhost-k8s-coredns--7db6d8ff4d--bq8dt-eth0" Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.790 [INFO][5278] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:58.795441 containerd[1499]: 2024-10-09 01:02:58.793 [INFO][5270] k8s.go 621: Teardown processing complete. ContainerID="d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d" Oct 9 01:02:58.795932 containerd[1499]: time="2024-10-09T01:02:58.795486434Z" level=info msg="TearDown network for sandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" successfully" Oct 9 01:02:58.799703 containerd[1499]: time="2024-10-09T01:02:58.799673549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:02:58.799748 containerd[1499]: time="2024-10-09T01:02:58.799727281Z" level=info msg="RemovePodSandbox \"d9a296bf049aae2e19666fdfb5135c14cbfe54916c70d20b98bb30097715ee2d\" returns successfully" Oct 9 01:02:58.800276 containerd[1499]: time="2024-10-09T01:02:58.800218155Z" level=info msg="StopPodSandbox for \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\"" Oct 9 01:02:58.800405 containerd[1499]: time="2024-10-09T01:02:58.800343411Z" level=info msg="TearDown network for sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" successfully" Oct 9 01:02:58.800405 containerd[1499]: time="2024-10-09T01:02:58.800355173Z" level=info msg="StopPodSandbox for \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" returns successfully" Oct 9 01:02:58.801479 containerd[1499]: time="2024-10-09T01:02:58.800667061Z" level=info msg="RemovePodSandbox for \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\"" Oct 9 01:02:58.801479 containerd[1499]: time="2024-10-09T01:02:58.800690775Z" level=info msg="Forcibly stopping sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\"" Oct 9 01:02:58.801479 containerd[1499]: time="2024-10-09T01:02:58.800749385Z" level=info msg="TearDown network for sandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" successfully" Oct 9 01:02:58.805160 containerd[1499]: time="2024-10-09T01:02:58.805128322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:02:58.805216 containerd[1499]: time="2024-10-09T01:02:58.805177334Z" level=info msg="RemovePodSandbox \"ff3ed4a8035002796af44260a883a8594c78c53d4268614598a806ba1049db56\" returns successfully" Oct 9 01:02:58.805472 containerd[1499]: time="2024-10-09T01:02:58.805451030Z" level=info msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.841 [WARNING][5301] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4xz24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08", Pod:"csi-node-driver-4xz24", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calid3d5bb939c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.841 [INFO][5301] k8s.go 608: Cleaning up netns ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.841 [INFO][5301] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" iface="eth0" netns="" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.841 [INFO][5301] k8s.go 615: Releasing IP address(es) ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.841 [INFO][5301] utils.go 188: Calico CNI releasing IP address ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.863 [INFO][5309] ipam_plugin.go 417: Releasing address using handleID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.863 [INFO][5309] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.863 [INFO][5309] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.867 [WARNING][5309] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.867 [INFO][5309] ipam_plugin.go 445: Releasing address using workloadID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.868 [INFO][5309] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:58.873726 containerd[1499]: 2024-10-09 01:02:58.871 [INFO][5301] k8s.go 621: Teardown processing complete. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.874156 containerd[1499]: time="2024-10-09T01:02:58.873773906Z" level=info msg="TearDown network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" successfully" Oct 9 01:02:58.874156 containerd[1499]: time="2024-10-09T01:02:58.873817257Z" level=info msg="StopPodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" returns successfully" Oct 9 01:02:58.874352 containerd[1499]: time="2024-10-09T01:02:58.874331836Z" level=info msg="RemovePodSandbox for \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" Oct 9 01:02:58.874397 containerd[1499]: time="2024-10-09T01:02:58.874360010Z" level=info msg="Forcibly stopping sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\"" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.906 [WARNING][5331] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4xz24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4e2fd99-fdbc-431a-b2d4-cd9ba338c55a", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc3685697d5ba74ddb2ab3d018ecc10642508a91fc16e4abde56edc24d593a08", Pod:"csi-node-driver-4xz24", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3d5bb939c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.906 [INFO][5331] k8s.go 608: Cleaning up netns ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.906 [INFO][5331] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" iface="eth0" netns="" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.906 [INFO][5331] k8s.go 615: Releasing IP address(es) ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.906 [INFO][5331] utils.go 188: Calico CNI releasing IP address ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.924 [INFO][5339] ipam_plugin.go 417: Releasing address using handleID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.924 [INFO][5339] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.924 [INFO][5339] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.929 [WARNING][5339] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.929 [INFO][5339] ipam_plugin.go 445: Releasing address using workloadID ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" HandleID="k8s-pod-network.439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Workload="localhost-k8s-csi--node--driver--4xz24-eth0" Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.930 [INFO][5339] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:02:58.935332 containerd[1499]: 2024-10-09 01:02:58.933 [INFO][5331] k8s.go 621: Teardown processing complete. ContainerID="439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512" Oct 9 01:02:58.935726 containerd[1499]: time="2024-10-09T01:02:58.935357079Z" level=info msg="TearDown network for sandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" successfully" Oct 9 01:02:58.939732 containerd[1499]: time="2024-10-09T01:02:58.939699237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:02:58.939789 containerd[1499]: time="2024-10-09T01:02:58.939747058Z" level=info msg="RemovePodSandbox \"439779ee2928c4bc410227d6792ac0fce4157c3311f706cabcea11f7d8e16512\" returns successfully" Oct 9 01:02:58.940278 containerd[1499]: time="2024-10-09T01:02:58.940243953Z" level=info msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.971 [WARNING][5361] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0", GenerateName:"calico-kube-controllers-795955cfdf-", Namespace:"calico-system", SelfLink:"", UID:"7a657548-7e20-4720-aab6-7e3bb6241198", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795955cfdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780", Pod:"calico-kube-controllers-795955cfdf-jtjng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid15269de60c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.972 [INFO][5361] k8s.go 608: Cleaning up netns ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.972 [INFO][5361] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" iface="eth0" netns="" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.972 [INFO][5361] k8s.go 615: Releasing IP address(es) ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.972 [INFO][5361] utils.go 188: Calico CNI releasing IP address ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.991 [INFO][5369] ipam_plugin.go 417: Releasing address using handleID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.992 [INFO][5369] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.992 [INFO][5369] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.996 [WARNING][5369] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.996 [INFO][5369] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:58.997 [INFO][5369] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:59.002939 containerd[1499]: 2024-10-09 01:02:59.000 [INFO][5361] k8s.go 621: Teardown processing complete. ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.002939 containerd[1499]: time="2024-10-09T01:02:59.002904701Z" level=info msg="TearDown network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" successfully" Oct 9 01:02:59.002939 containerd[1499]: time="2024-10-09T01:02:59.002932264Z" level=info msg="StopPodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" returns successfully" Oct 9 01:02:59.003525 containerd[1499]: time="2024-10-09T01:02:59.003451671Z" level=info msg="RemovePodSandbox for \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" Oct 9 01:02:59.003525 containerd[1499]: time="2024-10-09T01:02:59.003477732Z" level=info msg="Forcibly stopping sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\"" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.036 [WARNING][5392] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0", GenerateName:"calico-kube-controllers-795955cfdf-", Namespace:"calico-system", SelfLink:"", UID:"7a657548-7e20-4720-aab6-7e3bb6241198", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795955cfdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71ea594940819a60cda852264b366ab021fdb3f9ab3bc192ba4ecc23da7fb780", Pod:"calico-kube-controllers-795955cfdf-jtjng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid15269de60c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.036 [INFO][5392] k8s.go 608: Cleaning up netns ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.036 [INFO][5392] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" iface="eth0" netns="" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.036 [INFO][5392] k8s.go 615: Releasing IP address(es) ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.036 [INFO][5392] utils.go 188: Calico CNI releasing IP address ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.057 [INFO][5399] ipam_plugin.go 417: Releasing address using handleID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.057 [INFO][5399] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.057 [INFO][5399] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.061 [WARNING][5399] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.061 [INFO][5399] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" HandleID="k8s-pod-network.6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Workload="localhost-k8s-calico--kube--controllers--795955cfdf--jtjng-eth0" Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.063 [INFO][5399] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:02:59.068160 containerd[1499]: 2024-10-09 01:02:59.065 [INFO][5392] k8s.go 621: Teardown processing complete. ContainerID="6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c" Oct 9 01:02:59.068598 containerd[1499]: time="2024-10-09T01:02:59.068222373Z" level=info msg="TearDown network for sandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" successfully" Oct 9 01:02:59.072586 containerd[1499]: time="2024-10-09T01:02:59.072561433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:02:59.072641 containerd[1499]: time="2024-10-09T01:02:59.072611821Z" level=info msg="RemovePodSandbox \"6973a9865a8693b3e469b3cb719dc1cd74a9275855bb1b03bfdbe2050a3c2b6c\" returns successfully" Oct 9 01:03:03.690004 kubelet[2687]: E1009 01:03:03.689039 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:03:03.744134 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390). Oct 9 01:03:03.798231 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:03:03.800554 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:03.816843 systemd-logind[1477]: New session 19 of user core. Oct 9 01:03:03.822557 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:03:04.039463 sshd[5443]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:04.060456 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:54390.service: Deactivated successfully. Oct 9 01:03:04.064887 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:03:04.069865 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:03:04.093269 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:54394.service - OpenSSH per-connection server daemon (10.0.0.1:54394). Oct 9 01:03:04.103037 systemd-logind[1477]: Removed session 19. Oct 9 01:03:04.147200 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 54394 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:03:04.156328 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:04.178005 systemd-logind[1477]: New session 20 of user core. Oct 9 01:03:04.194060 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 9 01:03:04.546273 sshd[5475]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:04.557442 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:54394.service: Deactivated successfully. Oct 9 01:03:04.559745 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:03:04.561360 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:03:04.562651 systemd-logind[1477]: Removed session 20. Oct 9 01:03:04.573199 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:54410.service - OpenSSH per-connection server daemon (10.0.0.1:54410). Oct 9 01:03:04.622218 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 54410 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:03:04.621529 sshd[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:04.637644 systemd-logind[1477]: New session 21 of user core. Oct 9 01:03:04.640363 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:03:05.015470 kubelet[2687]: I1009 01:03:05.013762 2687 topology_manager.go:215] "Topology Admit Handler" podUID="9e9170fd-caab-4af1-9fd6-35714b169d1a" podNamespace="calico-apiserver" podName="calico-apiserver-699c58bbcc-7gb42" Oct 9 01:03:05.028612 systemd[1]: Created slice kubepods-besteffort-pod9e9170fd_caab_4af1_9fd6_35714b169d1a.slice - libcontainer container kubepods-besteffort-pod9e9170fd_caab_4af1_9fd6_35714b169d1a.slice. 
Oct 9 01:03:05.061167 kubelet[2687]: I1009 01:03:05.061138 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e9170fd-caab-4af1-9fd6-35714b169d1a-calico-apiserver-certs\") pod \"calico-apiserver-699c58bbcc-7gb42\" (UID: \"9e9170fd-caab-4af1-9fd6-35714b169d1a\") " pod="calico-apiserver/calico-apiserver-699c58bbcc-7gb42"
Oct 9 01:03:05.061284 kubelet[2687]: I1009 01:03:05.061169 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8ptl\" (UniqueName: \"kubernetes.io/projected/9e9170fd-caab-4af1-9fd6-35714b169d1a-kube-api-access-j8ptl\") pod \"calico-apiserver-699c58bbcc-7gb42\" (UID: \"9e9170fd-caab-4af1-9fd6-35714b169d1a\") " pod="calico-apiserver/calico-apiserver-699c58bbcc-7gb42"
Oct 9 01:03:05.333199 containerd[1499]: time="2024-10-09T01:03:05.333138291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c58bbcc-7gb42,Uid:9e9170fd-caab-4af1-9fd6-35714b169d1a,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 01:03:05.501130 systemd-networkd[1414]: calie080ce3480b: Link UP
Oct 9 01:03:05.505540 systemd-networkd[1414]: calie080ce3480b: Gained carrier
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.387 [INFO][5514] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0 calico-apiserver-699c58bbcc- calico-apiserver 9e9170fd-caab-4af1-9fd6-35714b169d1a 1141 0 2024-10-09 01:03:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699c58bbcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-699c58bbcc-7gb42 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie080ce3480b [] []}} ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.387 [INFO][5514] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.415 [INFO][5524] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" HandleID="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Workload="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.442 [INFO][5524] ipam_plugin.go 270: Auto assigning IP ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" HandleID="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Workload="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312e80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-699c58bbcc-7gb42", "timestamp":"2024-10-09 01:03:05.415796768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.442 [INFO][5524] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.442 [INFO][5524] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.442 [INFO][5524] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.445 [INFO][5524] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.451 [INFO][5524] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.459 [INFO][5524] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.464 [INFO][5524] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.467 [INFO][5524] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.467 [INFO][5524] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.469 [INFO][5524] ipam.go 1685: Creating new handle: k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.476 [INFO][5524] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.486 [INFO][5524] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.487 [INFO][5524] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" host="localhost"
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.487 [INFO][5524] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:03:05.529705 containerd[1499]: 2024-10-09 01:03:05.487 [INFO][5524] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" HandleID="k8s-pod-network.f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Workload="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.492 [INFO][5514] k8s.go 386: Populated endpoint ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0", GenerateName:"calico-apiserver-699c58bbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e9170fd-caab-4af1-9fd6-35714b169d1a", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c58bbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-699c58bbcc-7gb42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie080ce3480b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.492 [INFO][5514] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.492 [INFO][5514] dataplane_linux.go 68: Setting the host side veth name to calie080ce3480b ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.501 [INFO][5514] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.502 [INFO][5514] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0", GenerateName:"calico-apiserver-699c58bbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e9170fd-caab-4af1-9fd6-35714b169d1a", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c58bbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9", Pod:"calico-apiserver-699c58bbcc-7gb42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie080ce3480b", MAC:"ea:4a:0f:8c:c3:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:03:05.531074 containerd[1499]: 2024-10-09 01:03:05.520 [INFO][5514] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9" Namespace="calico-apiserver" Pod="calico-apiserver-699c58bbcc-7gb42" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c58bbcc--7gb42-eth0"
Oct 9 01:03:05.560858 containerd[1499]: time="2024-10-09T01:03:05.560749264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:03:05.560858 containerd[1499]: time="2024-10-09T01:03:05.560878373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:03:05.561197 containerd[1499]: time="2024-10-09T01:03:05.560905886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:03:05.561197 containerd[1499]: time="2024-10-09T01:03:05.561058430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:03:05.600537 systemd[1]: Started cri-containerd-f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9.scope - libcontainer container f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9.
Oct 9 01:03:05.616578 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 01:03:05.646909 containerd[1499]: time="2024-10-09T01:03:05.646795221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c58bbcc-7gb42,Uid:9e9170fd-caab-4af1-9fd6-35714b169d1a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9\""
Oct 9 01:03:05.649387 containerd[1499]: time="2024-10-09T01:03:05.649120349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 01:03:06.208649 sshd[5487]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:06.219646 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:54410.service: Deactivated successfully.
Oct 9 01:03:06.223020 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 01:03:06.225447 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit.
Oct 9 01:03:06.230537 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:54416.service - OpenSSH per-connection server daemon (10.0.0.1:54416).
Oct 9 01:03:06.233720 systemd-logind[1477]: Removed session 21.
Oct 9 01:03:06.268014 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 54416 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:06.269723 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:06.273794 systemd-logind[1477]: New session 22 of user core.
Oct 9 01:03:06.284400 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 01:03:06.501960 sshd[5597]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:06.512117 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:54416.service: Deactivated successfully.
Oct 9 01:03:06.514412 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 01:03:06.516130 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit.
Oct 9 01:03:06.522476 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:54418.service - OpenSSH per-connection server daemon (10.0.0.1:54418).
Oct 9 01:03:06.523824 systemd-logind[1477]: Removed session 22.
Oct 9 01:03:06.555566 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 54418 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:06.557221 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:06.561067 systemd-logind[1477]: New session 23 of user core.
Oct 9 01:03:06.565368 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 01:03:06.674323 sshd[5609]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:06.678465 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:54418.service: Deactivated successfully.
Oct 9 01:03:06.680675 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 01:03:06.681483 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit.
Oct 9 01:03:06.682850 systemd-logind[1477]: Removed session 23.
Oct 9 01:03:07.398379 systemd-networkd[1414]: calie080ce3480b: Gained IPv6LL
Oct 9 01:03:07.736124 containerd[1499]: time="2024-10-09T01:03:07.735993281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:03:07.737038 containerd[1499]: time="2024-10-09T01:03:07.736996455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 01:03:07.738260 containerd[1499]: time="2024-10-09T01:03:07.738235664Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:03:07.740836 containerd[1499]: time="2024-10-09T01:03:07.740791009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:03:07.741523 containerd[1499]: time="2024-10-09T01:03:07.741494565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.092346382s"
Oct 9 01:03:07.741559 containerd[1499]: time="2024-10-09T01:03:07.741523981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 01:03:07.744851 containerd[1499]: time="2024-10-09T01:03:07.744806578Z" level=info msg="CreateContainer within sandbox \"f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 01:03:07.758821 containerd[1499]: time="2024-10-09T01:03:07.758761064Z" level=info msg="CreateContainer within sandbox \"f1182b0ad5b7cad4d2b91505aaf81565e9ce60cb5dbeb48b41b8d8be6f3837b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e207ab1fb93fb1a90e6a92ad31a81c4e65b016335cd155ad267fadeca30f2cea\""
Oct 9 01:03:07.759361 containerd[1499]: time="2024-10-09T01:03:07.759318638Z" level=info msg="StartContainer for \"e207ab1fb93fb1a90e6a92ad31a81c4e65b016335cd155ad267fadeca30f2cea\""
Oct 9 01:03:07.803336 systemd[1]: Started cri-containerd-e207ab1fb93fb1a90e6a92ad31a81c4e65b016335cd155ad267fadeca30f2cea.scope - libcontainer container e207ab1fb93fb1a90e6a92ad31a81c4e65b016335cd155ad267fadeca30f2cea.
Oct 9 01:03:07.867443 containerd[1499]: time="2024-10-09T01:03:07.867372858Z" level=info msg="StartContainer for \"e207ab1fb93fb1a90e6a92ad31a81c4e65b016335cd155ad267fadeca30f2cea\" returns successfully"
Oct 9 01:03:08.851857 kubelet[2687]: I1009 01:03:08.851805 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699c58bbcc-7gb42" podStartSLOduration=2.758208842 podStartE2EDuration="4.851789975s" podCreationTimestamp="2024-10-09 01:03:04 +0000 UTC" firstStartedPulling="2024-10-09 01:03:05.64872209 +0000 UTC m=+67.252613478" lastFinishedPulling="2024-10-09 01:03:07.742303223 +0000 UTC m=+69.346194611" observedRunningTime="2024-10-09 01:03:08.850443363 +0000 UTC m=+70.454334751" watchObservedRunningTime="2024-10-09 01:03:08.851789975 +0000 UTC m=+70.455681363"
Oct 9 01:03:11.687318 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:38092.service - OpenSSH per-connection server daemon (10.0.0.1:38092).
Oct 9 01:03:11.726085 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 38092 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:11.727911 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:11.732239 systemd-logind[1477]: New session 24 of user core.
Oct 9 01:03:11.740320 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 01:03:11.856753 sshd[5682]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:11.860257 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:38092.service: Deactivated successfully.
Oct 9 01:03:11.862091 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 01:03:11.862912 systemd-logind[1477]: Session 24 logged out. Waiting for processes to exit.
Oct 9 01:03:11.863708 systemd-logind[1477]: Removed session 24.
Oct 9 01:03:16.868934 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:38096.service - OpenSSH per-connection server daemon (10.0.0.1:38096).
Oct 9 01:03:16.920078 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:16.921829 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:16.925929 systemd-logind[1477]: New session 25 of user core.
Oct 9 01:03:16.931311 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 01:03:17.049846 sshd[5708]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:17.055023 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:38096.service: Deactivated successfully.
Oct 9 01:03:17.057478 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 01:03:17.058192 systemd-logind[1477]: Session 25 logged out. Waiting for processes to exit.
Oct 9 01:03:17.059177 systemd-logind[1477]: Removed session 25.
Oct 9 01:03:21.472392 kubelet[2687]: E1009 01:03:21.472345 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:03:22.062408 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:46064.service - OpenSSH per-connection server daemon (10.0.0.1:46064).
Oct 9 01:03:22.103231 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 46064 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:22.105077 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:22.109383 systemd-logind[1477]: New session 26 of user core.
Oct 9 01:03:22.118376 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 01:03:22.236228 sshd[5723]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:22.240984 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:46064.service: Deactivated successfully.
Oct 9 01:03:22.242961 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 01:03:22.243728 systemd-logind[1477]: Session 26 logged out. Waiting for processes to exit.
Oct 9 01:03:22.244893 systemd-logind[1477]: Removed session 26.
Oct 9 01:03:22.472619 kubelet[2687]: E1009 01:03:22.472588 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:03:27.248374 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:46324.service - OpenSSH per-connection server daemon (10.0.0.1:46324).
Oct 9 01:03:27.284125 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 46324 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:03:27.285793 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:03:27.289371 systemd-logind[1477]: New session 27 of user core.
Oct 9 01:03:27.298305 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 01:03:27.415239 sshd[5751]: pam_unix(sshd:session): session closed for user core
Oct 9 01:03:27.419381 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:46324.service: Deactivated successfully.
Oct 9 01:03:27.421620 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 01:03:27.422392 systemd-logind[1477]: Session 27 logged out. Waiting for processes to exit.
Oct 9 01:03:27.423247 systemd-logind[1477]: Removed session 27.
Oct 9 01:03:28.472570 kubelet[2687]: E1009 01:03:28.472531 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"