Nov 12 20:58:25.888685 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:58:25.888708 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:58:25.888720 kernel: BIOS-provided physical RAM map:
Nov 12 20:58:25.888726 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:58:25.888732 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 12 20:58:25.888738 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 12 20:58:25.888745 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 12 20:58:25.888752 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 12 20:58:25.888758 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 12 20:58:25.888764 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 12 20:58:25.888774 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 12 20:58:25.888782 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 12 20:58:25.888790 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 12 20:58:25.888798 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 12 20:58:25.888808 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 12 20:58:25.888817 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 12 20:58:25.888832 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 12 20:58:25.888844 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 12 20:58:25.888853 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 12 20:58:25.888862 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:58:25.888871 kernel: NX (Execute Disable) protection: active
Nov 12 20:58:25.888879 kernel: APIC: Static calls initialized
Nov 12 20:58:25.888889 kernel: efi: EFI v2.7 by EDK II
Nov 12 20:58:25.888896 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Nov 12 20:58:25.888904 kernel: SMBIOS 2.8 present.
Nov 12 20:58:25.888918 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 12 20:58:25.888929 kernel: Hypervisor detected: KVM
Nov 12 20:58:25.888944 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:58:25.888951 kernel: kvm-clock: using sched offset of 4178117279 cycles
Nov 12 20:58:25.888958 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:58:25.888965 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:58:25.888972 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:58:25.888979 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:58:25.888986 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 12 20:58:25.888993 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:58:25.889002 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:58:25.889014 kernel: Using GB pages for direct mapping
Nov 12 20:58:25.889024 kernel: Secure boot disabled
Nov 12 20:58:25.889033 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:58:25.889042 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 12 20:58:25.889054 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 20:58:25.889061 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889068 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889078 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 12 20:58:25.889085 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889093 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889103 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889113 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:58:25.889122 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 12 20:58:25.889132 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 12 20:58:25.889144 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Nov 12 20:58:25.889151 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 12 20:58:25.889158 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 12 20:58:25.889167 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 12 20:58:25.889183 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 12 20:58:25.889204 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 12 20:58:25.889214 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 12 20:58:25.889223 kernel: No NUMA configuration found
Nov 12 20:58:25.889233 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 12 20:58:25.889244 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 12 20:58:25.889251 kernel: Zone ranges:
Nov 12 20:58:25.889258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:58:25.889265 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 12 20:58:25.889272 kernel: Normal empty
Nov 12 20:58:25.889279 kernel: Movable zone start for each node
Nov 12 20:58:25.889286 kernel: Early memory node ranges
Nov 12 20:58:25.889293 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:58:25.889300 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 12 20:58:25.889307 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 12 20:58:25.889317 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 12 20:58:25.889324 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 12 20:58:25.889331 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 12 20:58:25.889338 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 12 20:58:25.889345 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:58:25.889352 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:58:25.889359 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 12 20:58:25.889366 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:58:25.889373 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 12 20:58:25.889382 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 12 20:58:25.889390 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 12 20:58:25.889397 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:58:25.889404 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:58:25.889411 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:58:25.889418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:58:25.889425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:58:25.889432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:58:25.889439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:58:25.889446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:58:25.889455 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:58:25.889463 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:58:25.889470 kernel: TSC deadline timer available
Nov 12 20:58:25.889477 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:58:25.889484 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:58:25.889491 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:58:25.889498 kernel: kvm-guest: setup PV sched yield
Nov 12 20:58:25.889505 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 12 20:58:25.889512 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:58:25.889522 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:58:25.889529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:58:25.889536 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:58:25.889543 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:58:25.889550 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:58:25.889557 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:58:25.889564 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:58:25.889572 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:58:25.889582 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:58:25.889590 kernel: random: crng init done
Nov 12 20:58:25.889597 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:58:25.889604 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:58:25.889611 kernel: Fallback order for Node 0: 0
Nov 12 20:58:25.889618 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 12 20:58:25.889626 kernel: Policy zone: DMA32
Nov 12 20:58:25.889633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:58:25.889640 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved)
Nov 12 20:58:25.889650 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:58:25.889657 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:58:25.889664 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:58:25.889671 kernel: Dynamic Preempt: voluntary
Nov 12 20:58:25.889702 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:58:25.889716 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:58:25.889725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:58:25.889735 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:58:25.889746 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:58:25.889754 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:58:25.889762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:58:25.889769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:58:25.889779 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:58:25.889787 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:58:25.889794 kernel: Console: colour dummy device 80x25
Nov 12 20:58:25.889802 kernel: printk: console [ttyS0] enabled
Nov 12 20:58:25.889809 kernel: ACPI: Core revision 20230628
Nov 12 20:58:25.889819 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:58:25.889827 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:58:25.889834 kernel: x2apic enabled
Nov 12 20:58:25.889842 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:58:25.889849 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:58:25.889857 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:58:25.889864 kernel: kvm-guest: setup PV IPIs
Nov 12 20:58:25.889872 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:58:25.889879 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:58:25.889889 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:58:25.889896 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:58:25.889904 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:58:25.889911 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:58:25.889919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:58:25.889926 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:58:25.889934 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:58:25.889942 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:58:25.889949 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:58:25.889959 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:58:25.889966 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:58:25.889974 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:58:25.889981 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:58:25.889990 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:58:25.889997 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:58:25.890005 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:58:25.890012 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:58:25.890022 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:58:25.890029 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:58:25.890037 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:58:25.890045 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:58:25.890052 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:58:25.890059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:58:25.890067 kernel: landlock: Up and running.
Nov 12 20:58:25.890074 kernel: SELinux: Initializing.
Nov 12 20:58:25.890082 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:58:25.890091 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:58:25.890099 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:58:25.890107 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:58:25.890115 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:58:25.890122 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:58:25.890130 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:58:25.890137 kernel: ... version: 0
Nov 12 20:58:25.890145 kernel: ... bit width: 48
Nov 12 20:58:25.890152 kernel: ... generic registers: 6
Nov 12 20:58:25.890162 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:58:25.890169 kernel: ... max period: 00007fffffffffff
Nov 12 20:58:25.890176 kernel: ... fixed-purpose events: 0
Nov 12 20:58:25.890184 kernel: ... event mask: 000000000000003f
Nov 12 20:58:25.890199 kernel: signal: max sigframe size: 1776
Nov 12 20:58:25.890206 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:58:25.890214 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:58:25.890221 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:58:25.890229 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:58:25.890239 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:58:25.890246 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:58:25.890254 kernel: smpboot: Max logical packages: 1
Nov 12 20:58:25.890263 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:58:25.890273 kernel: devtmpfs: initialized
Nov 12 20:58:25.890284 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:58:25.890294 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 12 20:58:25.890304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 12 20:58:25.890314 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 12 20:58:25.890327 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 12 20:58:25.890337 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 12 20:58:25.890347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:58:25.890357 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:58:25.890367 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:58:25.890378 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:58:25.890387 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:58:25.890395 kernel: audit: type=2000 audit(1731445105.356:1): state=initialized audit_enabled=0 res=1
Nov 12 20:58:25.890402 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:58:25.890413 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:58:25.890420 kernel: cpuidle: using governor menu
Nov 12 20:58:25.890428 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:58:25.890436 kernel: dca service started, version 1.12.1
Nov 12 20:58:25.890443 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:58:25.890451 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:58:25.890458 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:58:25.890466 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:58:25.890473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:58:25.890483 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:58:25.890491 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:58:25.890498 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:58:25.890506 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:58:25.890515 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:58:25.890523 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:58:25.890532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:58:25.890541 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:58:25.890548 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:58:25.890558 kernel: ACPI: Interpreter enabled
Nov 12 20:58:25.890567 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:58:25.890578 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:58:25.890585 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:58:25.890593 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:58:25.890600 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:58:25.890608 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:58:25.890840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:58:25.890979 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:58:25.891100 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:58:25.891111 kernel: PCI host bridge to bus 0000:00
Nov 12 20:58:25.891246 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:58:25.891358 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:58:25.891483 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:58:25.891597 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:58:25.891733 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:58:25.892962 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 12 20:58:25.893079 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:58:25.893226 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:58:25.893358 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:58:25.893480 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 12 20:58:25.893605 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 12 20:58:25.893756 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 12 20:58:25.893886 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 12 20:58:25.894005 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:58:25.894153 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:58:25.894283 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 12 20:58:25.894404 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 12 20:58:25.894529 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 12 20:58:25.894663 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:58:25.894816 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 12 20:58:25.894940 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 12 20:58:25.895060 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 12 20:58:25.895199 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:58:25.895335 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 12 20:58:25.895465 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 12 20:58:25.895589 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 12 20:58:25.895726 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 12 20:58:25.895859 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:58:25.895978 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:58:25.896105 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:58:25.896240 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 12 20:58:25.896360 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 12 20:58:25.896487 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:58:25.896607 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 12 20:58:25.896618 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:58:25.896626 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:58:25.896633 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:58:25.896641 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:58:25.896652 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:58:25.896659 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:58:25.896667 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:58:25.896691 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:58:25.896698 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:58:25.896706 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:58:25.896713 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:58:25.896721 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:58:25.896728 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:58:25.896738 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:58:25.896746 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:58:25.896753 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:58:25.896761 kernel: iommu: Default domain type: Translated
Nov 12 20:58:25.896768 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:58:25.896776 kernel: efivars: Registered efivars operations
Nov 12 20:58:25.896783 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:58:25.896791 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:58:25.896798 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 12 20:58:25.896808 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 12 20:58:25.896815 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 12 20:58:25.896823 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 12 20:58:25.896944 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:58:25.897062 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:58:25.897181 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:58:25.897198 kernel: vgaarb: loaded
Nov 12 20:58:25.897205 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:58:25.897213 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:58:25.897224 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:58:25.897232 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:58:25.897239 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:58:25.897247 kernel: pnp: PnP ACPI init
Nov 12 20:58:25.897390 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:58:25.897403 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:58:25.897411 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:58:25.897419 kernel: NET: Registered PF_INET protocol family
Nov 12 20:58:25.897430 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:58:25.897437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:58:25.897445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:58:25.897453 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:58:25.897460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:58:25.897468 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:58:25.897475 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:58:25.897483 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:58:25.897490 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:58:25.897502 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:58:25.897666 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 12 20:58:25.897871 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 12 20:58:25.898010 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:58:25.898137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:58:25.898258 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:58:25.898369 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:58:25.898478 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:58:25.898593 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 12 20:58:25.898603 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:58:25.898611 kernel: Initialise system trusted keyrings
Nov 12 20:58:25.898619 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:58:25.898626 kernel: Key type asymmetric registered
Nov 12 20:58:25.898634 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:58:25.898641 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:58:25.898649 kernel: io scheduler mq-deadline registered
Nov 12 20:58:25.898660 kernel: io scheduler kyber registered
Nov 12 20:58:25.898667 kernel: io scheduler bfq registered
Nov 12 20:58:25.898687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:58:25.898696 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:58:25.898704 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:58:25.898711 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:58:25.898719 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:58:25.898726 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:58:25.898734 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:58:25.898741 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:58:25.898752 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:58:25.898883 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:58:25.898894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:58:25.899006 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:58:25.899119 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:58:25 UTC (1731445105)
Nov 12 20:58:25.899239 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:58:25.899250 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:58:25.899260 kernel: efifb: probing for efifb
Nov 12 20:58:25.899268 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 12 20:58:25.899276 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 12 20:58:25.899283 kernel: efifb: scrolling: redraw
Nov 12 20:58:25.899291 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 12 20:58:25.899298 kernel: Console: switching to colour frame buffer device 100x37
Nov 12 20:58:25.899323 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:58:25.899333 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:58:25.899341 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:58:25.899351 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:58:25.899359 kernel: Segment Routing with IPv6
Nov 12 20:58:25.899367 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:58:25.899375 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:58:25.899382 kernel: Key type dns_resolver registered
Nov 12 20:58:25.899390 kernel: IPI shorthand broadcast: enabled
Nov 12 20:58:25.899398 kernel: sched_clock: Marking stable (593002043, 119600305)->(761328571, -48726223)
Nov 12 20:58:25.899405 kernel: registered taskstats version 1
Nov 12 20:58:25.899413 kernel: Loading compiled-in X.509 certificates
Nov 12 20:58:25.899421 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:58:25.899431 kernel: Key type .fscrypt registered
Nov 12 20:58:25.899438 kernel: Key type fscrypt-provisioning registered
Nov 12 20:58:25.899446 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:58:25.899454 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:58:25.899462 kernel: ima: No architecture policies found
Nov 12 20:58:25.899469 kernel: clk: Disabling unused clocks
Nov 12 20:58:25.899477 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:58:25.899485 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:58:25.899495 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:58:25.899503 kernel: Run /init as init process
Nov 12 20:58:25.899510 kernel: with arguments:
Nov 12 20:58:25.899518 kernel: /init
Nov 12 20:58:25.899528 kernel: with environment:
Nov 12 20:58:25.899536 kernel: HOME=/
Nov 12 20:58:25.899543 kernel: TERM=linux
Nov 12 20:58:25.899551 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:58:25.899561 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:58:25.899573 systemd[1]: Detected virtualization kvm.
Nov 12 20:58:25.899582 systemd[1]: Detected architecture x86-64.
Nov 12 20:58:25.899590 systemd[1]: Running in initrd.
Nov 12 20:58:25.899600 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:58:25.899610 systemd[1]: Hostname set to .
Nov 12 20:58:25.899619 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:58:25.899627 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:58:25.899635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:58:25.899643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:58:25.899652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:58:25.899660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:58:25.899669 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:58:25.899693 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:58:25.899702 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:58:25.899711 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:58:25.899719 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:58:25.899727 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:58:25.899735 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:58:25.899744 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:58:25.899754 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:58:25.899763 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:58:25.899771 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:58:25.899779 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:58:25.899788 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:58:25.899796 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:58:25.899805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:58:25.899813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:58:25.899823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:58:25.899832 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:58:25.899840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:58:25.899848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:58:25.899857 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:58:25.899865 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:58:25.899873 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:58:25.899882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:58:25.899890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:58:25.899901 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:58:25.899909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:58:25.899917 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:58:25.899926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:58:25.899937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:58:25.899945 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:58:25.899972 systemd-journald[192]: Collecting audit messages is disabled.
Nov 12 20:58:25.899997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:58:25.900013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:58:25.900024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:58:25.900035 systemd-journald[192]: Journal started
Nov 12 20:58:25.900059 systemd-journald[192]: Runtime Journal (/run/log/journal/c275d29670324f66bd492b63fe1efadb) is 6.0M, max 48.3M, 42.2M free.
Nov 12 20:58:25.880801 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 20:58:25.905038 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:58:25.912700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:58:25.914797 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 20:58:25.915759 kernel: Bridge firewalling registered
Nov 12 20:58:25.924955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:58:25.926624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:58:25.928950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:58:25.937367 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:58:25.941364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:58:25.942991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:58:25.949998 dracut-cmdline[221]: dracut-dracut-053
Nov 12 20:58:25.952551 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:58:25.959404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:58:25.964826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:58:25.994969 systemd-resolved[246]: Positive Trust Anchors:
Nov 12 20:58:25.994983 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:58:25.995013 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:58:25.997432 systemd-resolved[246]: Defaulting to hostname 'linux'.
Nov 12 20:58:25.998460 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:58:26.004721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:58:26.038709 kernel: SCSI subsystem initialized
Nov 12 20:58:26.047700 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:58:26.068700 kernel: iscsi: registered transport (tcp)
Nov 12 20:58:26.089703 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:58:26.089727 kernel: QLogic iSCSI HBA Driver
Nov 12 20:58:26.137931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:58:26.156019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:58:26.183472 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:58:26.183513 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:58:26.183539 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:58:26.225728 kernel: raid6: avx2x4 gen() 29321 MB/s
Nov 12 20:58:26.242708 kernel: raid6: avx2x2 gen() 29917 MB/s
Nov 12 20:58:26.260056 kernel: raid6: avx2x1 gen() 21029 MB/s
Nov 12 20:58:26.260087 kernel: raid6: using algorithm avx2x2 gen() 29917 MB/s
Nov 12 20:58:26.278064 kernel: raid6: .... xor() 14438 MB/s, rmw enabled
Nov 12 20:58:26.278091 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:58:26.298723 kernel: xor: automatically using best checksumming function avx
Nov 12 20:58:26.454725 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:58:26.468222 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:58:26.479889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:58:26.491434 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Nov 12 20:58:26.495585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:58:26.508823 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:58:26.524729 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Nov 12 20:58:26.558365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:58:26.572867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:58:26.640514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:58:26.650857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:58:26.667696 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:58:26.727230 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:58:26.727497 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:58:26.727511 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:58:26.727529 kernel: GPT:9289727 != 19775487
Nov 12 20:58:26.727539 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:58:26.727550 kernel: GPT:9289727 != 19775487
Nov 12 20:58:26.727559 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:58:26.727569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:58:26.727580 kernel: libata version 3.00 loaded.
Nov 12 20:58:26.727590 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:58:26.727600 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:58:26.727610 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:58:26.749824 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:58:26.749843 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:58:26.750016 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:58:26.750185 kernel: scsi host0: ahci
Nov 12 20:58:26.750358 kernel: scsi host1: ahci
Nov 12 20:58:26.750519 kernel: scsi host2: ahci
Nov 12 20:58:26.750669 kernel: scsi host3: ahci
Nov 12 20:58:26.750841 kernel: scsi host4: ahci
Nov 12 20:58:26.750989 kernel: scsi host5: ahci
Nov 12 20:58:26.751130 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 12 20:58:26.751141 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 12 20:58:26.751153 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 12 20:58:26.751164 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 12 20:58:26.751184 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Nov 12 20:58:26.751198 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 12 20:58:26.751208 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 12 20:58:26.667904 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:58:26.670513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:58:26.673896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:58:26.756809 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475)
Nov 12 20:58:26.675148 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:58:26.686841 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:58:26.704425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:58:26.704624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:58:26.727774 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:58:26.729192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:58:26.729364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:58:26.730796 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:58:26.740129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:58:26.747599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:58:26.770213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:58:26.783093 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:58:26.797938 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:58:26.801284 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:58:26.808164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:58:26.815173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:58:26.841824 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:58:26.844180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:58:26.844238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:58:26.847633 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:58:26.850492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:58:26.854562 disk-uuid[567]: Primary Header is updated.
Nov 12 20:58:26.854562 disk-uuid[567]: Secondary Entries is updated.
Nov 12 20:58:26.854562 disk-uuid[567]: Secondary Header is updated.
Nov 12 20:58:26.860766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:58:26.864701 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:58:26.872664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:58:26.880844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:58:26.906884 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:58:27.056715 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:58:27.056796 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:58:27.064697 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:58:27.064729 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:58:27.065704 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:58:27.065717 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:58:27.066712 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:58:27.068177 kernel: ata3.00: applying bridge limits
Nov 12 20:58:27.068191 kernel: ata3.00: configured for UDMA/100
Nov 12 20:58:27.068708 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:58:27.113708 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:58:27.131493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:58:27.131514 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:58:27.865698 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:58:27.866242 disk-uuid[569]: The operation has completed successfully.
Nov 12 20:58:27.889492 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:58:27.889612 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:58:27.915820 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:58:27.918702 sh[597]: Success
Nov 12 20:58:27.930728 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:58:27.961756 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:58:27.976061 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:58:27.978658 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:58:27.990223 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:58:27.990257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:58:27.990268 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:58:27.991990 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:58:27.992004 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:58:27.997240 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:58:27.997667 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:58:28.008904 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:58:28.011519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:58:28.019730 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:58:28.019761 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:58:28.019776 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:58:28.023708 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:58:28.032062 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:58:28.033853 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:58:28.041721 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:58:28.050857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:58:28.099753 ignition[688]: Ignition 2.19.0
Nov 12 20:58:28.099768 ignition[688]: Stage: fetch-offline
Nov 12 20:58:28.099803 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:58:28.099813 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:58:28.099910 ignition[688]: parsed url from cmdline: ""
Nov 12 20:58:28.099914 ignition[688]: no config URL provided
Nov 12 20:58:28.099920 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:58:28.099929 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:58:28.099954 ignition[688]: op(1): [started] loading QEMU firmware config module
Nov 12 20:58:28.099959 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:58:28.107553 ignition[688]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:58:28.126026 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:58:28.139870 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:58:28.155693 ignition[688]: parsing config with SHA512: 24083a6d6d154453c4addb1aec4a8fb25cbce374eafb9f38e21174beb95f52a754370d5e5a1e2a07130c1927e35d6ba7448fc517eafee72f03653b2717bd4f44
Nov 12 20:58:28.159490 unknown[688]: fetched base config from "system"
Nov 12 20:58:28.159504 unknown[688]: fetched user config from "qemu"
Nov 12 20:58:28.161688 ignition[688]: fetch-offline: fetch-offline passed
Nov 12 20:58:28.161775 ignition[688]: Ignition finished successfully
Nov 12 20:58:28.164078 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:58:28.167818 systemd-networkd[785]: lo: Link UP
Nov 12 20:58:28.167829 systemd-networkd[785]: lo: Gained carrier
Nov 12 20:58:28.170670 systemd-networkd[785]: Enumeration completed
Nov 12 20:58:28.170796 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:58:28.171406 systemd[1]: Reached target network.target - Network.
Nov 12 20:58:28.171691 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:58:28.177094 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:58:28.177104 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:58:28.177812 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:58:28.182881 systemd-networkd[785]: eth0: Link UP
Nov 12 20:58:28.182892 systemd-networkd[785]: eth0: Gained carrier
Nov 12 20:58:28.182898 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:58:28.190717 ignition[789]: Ignition 2.19.0
Nov 12 20:58:28.190735 ignition[789]: Stage: kargs
Nov 12 20:58:28.190930 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:58:28.190941 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:58:28.191684 ignition[789]: kargs: kargs passed
Nov 12 20:58:28.191724 ignition[789]: Ignition finished successfully
Nov 12 20:58:28.198377 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:58:28.199728 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:58:28.208826 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:58:28.219985 ignition[797]: Ignition 2.19.0
Nov 12 20:58:28.219995 ignition[797]: Stage: disks
Nov 12 20:58:28.220152 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:58:28.220164 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:58:28.223039 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:58:28.220972 ignition[797]: disks: disks passed
Nov 12 20:58:28.224609 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:58:28.221013 ignition[797]: Ignition finished successfully
Nov 12 20:58:28.226489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:58:28.228445 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:58:28.230499 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:58:28.230905 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:58:28.238797 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:58:28.251301 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:58:28.257468 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:58:28.263814 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:58:28.347699 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:58:28.348346 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:58:28.349189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:58:28.357755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:58:28.359622 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:58:28.361979 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:58:28.362026 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:58:28.362053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:58:28.370971 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Nov 12 20:58:28.370989 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:58:28.370999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:58:28.371009 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:58:28.373698 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:58:28.375648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:58:28.388710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:58:28.390408 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:58:28.424509 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:58:28.428619 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:58:28.433350 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:58:28.437884 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:58:28.514528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:58:28.530776 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:58:28.532577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:58:28.538701 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:58:28.556012 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:58:28.560980 ignition[932]: INFO : Ignition 2.19.0 Nov 12 20:58:28.560980 ignition[932]: INFO : Stage: mount Nov 12 20:58:28.562825 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:58:28.562825 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:58:28.562825 ignition[932]: INFO : mount: mount passed Nov 12 20:58:28.562825 ignition[932]: INFO : Ignition finished successfully Nov 12 20:58:28.564013 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:58:28.570773 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:58:28.989564 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:58:29.002860 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:58:29.008696 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) Nov 12 20:58:29.010826 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:58:29.010845 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:58:29.010855 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:58:29.013698 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:58:29.015055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
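The four cut errors above are benign on a first boot: initrd-setup-root probes the account databases under /sysroot/etc before seeding them, and on a pristine root they do not exist yet. A representative probe (the exact field flags are not captured in the log):

  cut -d: -f1 /sysroot/etc/passwd   # list login names; fails with "No such file" here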
Nov 12 20:58:29.037539 ignition[961]: INFO : Ignition 2.19.0 Nov 12 20:58:29.037539 ignition[961]: INFO : Stage: files Nov 12 20:58:29.039340 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:58:29.039340 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:58:29.039340 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:58:29.043054 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:58:29.043054 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:58:29.043054 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:58:29.043054 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:58:29.043054 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:58:29.042149 unknown[961]: wrote ssh authorized keys file for user: core Nov 12 20:58:29.050969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:58:29.050969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:58:29.080067 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:58:29.161893 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:58:29.164084 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:58:29.281810 systemd-networkd[785]: eth0: Gained IPv6LL Nov 12 20:58:29.509779 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:58:30.069120 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:58:30.069120 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 12 20:58:30.073263 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:58:30.091415 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:58:30.096172 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:58:30.097727 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:58:30.097727 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:58:30.097727 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:58:30.097727 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:58:30.097727 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:58:30.097727 ignition[961]: INFO : files: files passed Nov 12 20:58:30.097727 ignition[961]: INFO : Ignition finished successfully Nov 12 20:58:30.099240 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:58:30.115821 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:58:30.117559 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 12 20:58:30.119407 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:58:30.119511 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:58:30.127648 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:58:30.130559 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:58:30.130559 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:58:30.133621 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:58:30.133493 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:58:30.135176 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:58:30.146811 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:58:30.170148 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:58:30.170276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:58:30.172619 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:58:30.174660 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:58:30.176720 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:58:30.186813 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:58:30.202852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:58:30.211042 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:58:30.220297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:58:30.222672 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:58:30.223973 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:58:30.225909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:58:30.226036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:58:30.228185 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:58:30.229939 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:58:30.231944 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:58:30.234095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:58:30.236116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:58:30.238252 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:58:30.240377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:58:30.242687 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:58:30.244668 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:58:30.246875 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:58:30.248640 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:58:30.248830 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:58:30.250890 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 12 20:58:30.252443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:58:30.254519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:58:30.254670 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:58:30.256715 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:58:30.256845 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:58:30.259032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:58:30.259188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:58:30.261163 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:58:30.262891 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:58:30.267805 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:58:30.269614 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:58:30.271275 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:58:30.273267 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:58:30.273370 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:58:30.275661 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:58:30.275763 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:58:30.277508 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:58:30.277622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:58:30.279600 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:58:30.279716 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:58:30.290822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:58:30.292446 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:58:30.293605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:58:30.293773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:58:30.295893 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:58:30.296063 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:58:30.302073 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:58:30.302208 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:58:30.305842 ignition[1016]: INFO : Ignition 2.19.0 Nov 12 20:58:30.305842 ignition[1016]: INFO : Stage: umount Nov 12 20:58:30.307632 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:58:30.307632 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:58:30.309870 ignition[1016]: INFO : umount: umount passed Nov 12 20:58:30.309870 ignition[1016]: INFO : Ignition finished successfully Nov 12 20:58:30.310173 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:58:30.310313 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:58:30.311889 systemd[1]: Stopped target network.target - Network. Nov 12 20:58:30.313480 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:58:30.313536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 12 20:58:30.315504 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:58:30.315551 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:58:30.317450 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:58:30.317495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:58:30.319308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:58:30.319355 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:58:30.321415 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:58:30.323448 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:58:30.326404 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:58:30.330718 systemd-networkd[785]: eth0: DHCPv6 lease lost Nov 12 20:58:30.333839 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:58:30.333987 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:58:30.336099 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:58:30.336139 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:58:30.342802 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:58:30.343268 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:58:30.343328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:58:30.343756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:58:30.344351 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:58:30.344467 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:58:30.348905 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:58:30.349008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:58:30.350870 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:58:30.350931 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:58:30.352466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:58:30.352523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:58:30.358427 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:58:30.358568 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:58:30.374450 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:58:30.374640 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:58:30.376947 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:58:30.376999 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:58:30.378966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:58:30.379017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:58:30.381050 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:58:30.381108 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:58:30.383342 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:58:30.383390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 12 20:58:30.385293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:58:30.385341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:58:30.394831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:58:30.395325 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:58:30.395387 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:58:30.395707 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:58:30.395765 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:58:30.396026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:58:30.396096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:58:30.396359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:58:30.396415 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:58:30.402111 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:58:30.402221 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:58:30.525767 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:58:30.525904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:58:30.527997 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:58:30.529751 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:58:30.529804 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:58:30.544827 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:58:30.551697 systemd[1]: Switching root. Nov 12 20:58:30.576115 systemd-journald[192]: Journal stopped Nov 12 20:58:31.779664 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 12 20:58:31.780423 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:58:31.780446 kernel: SELinux: policy capability open_perms=1 Nov 12 20:58:31.780462 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:58:31.780477 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:58:31.780493 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:58:31.780508 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:58:31.780524 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:58:31.780540 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:58:31.780558 kernel: audit: type=1403 audit(1731445111.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:58:31.780576 systemd[1]: Successfully loaded SELinux policy in 37.904ms. Nov 12 20:58:31.780608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.758ms. Nov 12 20:58:31.780626 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:58:31.780642 systemd[1]: Detected virtualization kvm. Nov 12 20:58:31.780657 systemd[1]: Detected architecture x86-64. 
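The journal gap between "Switching root" and "Journal stopped" is the initrd PID 1 pivoting into /sysroot and re-executing itself there, after which the real root's journald, the SELinux policy, and systemd 255 come up. The administrative equivalent of that hand-off is a single call (shown as a sketch; the initrd drives it internally):

  systemctl switch-root /sysroot   # PID 1 moves to the new root and re-executes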
Nov 12 20:58:31.780671 systemd[1]: Detected first boot. Nov 12 20:58:31.780701 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:58:31.780719 zram_generator::config[1061]: No configuration found. Nov 12 20:58:31.780741 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:58:31.780757 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:58:31.780776 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:58:31.780791 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:58:31.780806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:58:31.780828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:58:31.780843 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:58:31.780860 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:58:31.780879 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:58:31.780895 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:58:31.780911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:58:31.780927 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:58:31.780943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:58:31.780959 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:58:31.780973 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:58:31.780988 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:58:31.781003 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:58:31.781022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:58:31.781037 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:58:31.781061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:58:31.781076 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:58:31.781091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:58:31.781107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:58:31.781126 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:58:31.781145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:58:31.781167 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:58:31.781183 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:58:31.781199 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:58:31.781215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:58:31.781230 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:58:31.781246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:58:31.781262 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
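The zram_generator line means only that no swap-on-zram was requested: the generator looks for zram-generator.conf and found nothing. Enabling it would take one file, e.g. (values shown are the upstream documented defaults, used here as an example):

  # /etc/systemd/zram-generator.conf
  [zram0]
  zram-size = min(ram / 2, 4096)
  compression-algorithm = zstd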
Nov 12 20:58:31.781284 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:58:31.781299 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:58:31.781319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:58:31.781335 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:58:31.781351 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:58:31.781367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:31.781383 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:58:31.781399 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:58:31.781415 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:58:31.781431 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:58:31.781450 systemd[1]: Reached target machines.target - Containers. Nov 12 20:58:31.781466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:58:31.781484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:58:31.781500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:58:31.781516 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:58:31.781533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:58:31.781548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:58:31.781564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:58:31.781579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:58:31.781599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:58:31.781615 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:58:31.781631 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:58:31.781650 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:58:31.781668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:58:31.781712 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:58:31.781729 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:58:31.781744 kernel: loop: module loaded Nov 12 20:58:31.781763 kernel: fuse: init (API version 7.39) Nov 12 20:58:31.781780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:58:31.781796 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:58:31.781811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:58:31.781827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:58:31.781843 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:58:31.781859 systemd[1]: Stopped verity-setup.service. 
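All of the modprobe@*.service units being started here are instances of one template that simply shells out to modprobe with the instance name. Paraphrased from the unit systemd ships (consult the installed file for the exact lines):

  # modprobe@.service (paraphrase)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target

  [Service]
  Type=oneshot
  ExecStart=-/usr/sbin/modprobe -abq %i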
Nov 12 20:58:31.781875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:31.781918 systemd-journald[1133]: Collecting audit messages is disabled. Nov 12 20:58:31.781947 kernel: ACPI: bus type drm_connector registered Nov 12 20:58:31.781963 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:58:31.781980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:58:31.781996 systemd-journald[1133]: Journal started Nov 12 20:58:31.782027 systemd-journald[1133]: Runtime Journal (/run/log/journal/c275d29670324f66bd492b63fe1efadb) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:58:31.537752 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:58:31.554172 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:58:31.554751 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:58:31.784704 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:58:31.785986 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:58:31.787072 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:58:31.788249 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:58:31.789438 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:58:31.790672 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:58:31.792161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:58:31.793810 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:58:31.794020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:58:31.795513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:58:31.795694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:58:31.797169 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:58:31.797372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:58:31.798752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:58:31.798918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:58:31.800404 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:58:31.800566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:58:31.801913 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:58:31.802081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:58:31.803442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:58:31.804810 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:58:31.806301 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:58:31.821066 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:58:31.831773 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:58:31.834176 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
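The size line journald prints ("6.0M, max 48.3M, 42.2M free") comes from its default percentage-of-filesystem caps rather than explicit configuration. To pin the caps instead, a drop-in would look like this (values illustrative, not the computed defaults):

  # /etc/systemd/journald.conf.d/size.conf
  [Journal]
  RuntimeMaxUse=48M
  SystemMaxUse=196M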
Nov 12 20:58:31.835312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:58:31.835343 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:58:31.837346 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:58:31.839826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:58:31.844796 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:58:31.845994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:58:31.848884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:58:31.852571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:58:31.854074 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:58:31.856156 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:58:31.857601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:58:31.863504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:58:31.869536 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:58:31.872647 systemd-journald[1133]: Time spent on flushing to /var/log/journal/c275d29670324f66bd492b63fe1efadb is 23.862ms for 996 entries. Nov 12 20:58:31.872647 systemd-journald[1133]: System Journal (/var/log/journal/c275d29670324f66bd492b63fe1efadb) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:58:31.909691 systemd-journald[1133]: Received client request to flush runtime journal. Nov 12 20:58:31.881905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:58:31.888070 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:58:31.889386 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:58:31.890951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:58:31.892477 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:58:31.896754 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:58:31.905858 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:58:31.912448 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:58:31.916857 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:58:31.920461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:58:31.920698 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:58:31.930875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:58:31.933174 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Nov 12 20:58:31.933192 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Nov 12 20:58:31.935351 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Nov 12 20:58:31.936013 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:58:31.942317 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:58:31.946013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:58:31.949765 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:58:31.954405 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:58:31.977712 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 20:58:31.978485 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:58:31.987847 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:58:32.007713 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Nov 12 20:58:32.007735 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Nov 12 20:58:32.013464 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:58:32.023710 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 20:58:32.048705 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 20:58:32.065490 kernel: loop4: detected capacity change from 0 to 140768 Nov 12 20:58:32.075705 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 20:58:32.081738 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:58:32.082326 (sd-merge)[1205]: Merged extensions into '/usr'. Nov 12 20:58:32.086646 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:58:32.086663 systemd[1]: Reloading... Nov 12 20:58:32.141447 zram_generator::config[1228]: No configuration found. Nov 12 20:58:32.176566 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:58:32.257918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:58:32.308903 systemd[1]: Reloading finished in 221 ms. Nov 12 20:58:32.347260 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:58:32.348992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:58:32.362848 systemd[1]: Starting ensure-sysext.service... Nov 12 20:58:32.364821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:58:32.372933 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:58:32.372951 systemd[1]: Reloading... Nov 12 20:58:32.389004 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:58:32.389381 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:58:32.390408 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:58:32.390716 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Nov 12 20:58:32.390799 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. 
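The (sd-merge) lines are systemd-sysext folding the extension images, including the kubernetes.raw that Ignition symlinked into /etc/extensions earlier, into an overlay over /usr. An image is only accepted if it carries an extension-release file matching the host; the usual minimal pair of fields looks like this (shown as an assumption about these images, not read from them):

  # inside the image: /usr/lib/extension-release.d/extension-release.kubernetes
  ID=flatcar
  SYSEXT_LEVEL=1.0

At runtime, systemd-sysext status lists which images are merged.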
Nov 12 20:58:32.394156 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:58:32.394169 systemd-tmpfiles[1269]: Skipping /boot Nov 12 20:58:32.406803 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:58:32.406817 systemd-tmpfiles[1269]: Skipping /boot Nov 12 20:58:32.427704 zram_generator::config[1302]: No configuration found. Nov 12 20:58:32.532295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:58:32.590148 systemd[1]: Reloading finished in 216 ms. Nov 12 20:58:32.608942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:58:32.621355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:58:32.631426 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:58:32.634134 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:58:32.636534 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:58:32.640974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:58:32.644827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:58:32.651777 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:58:32.656627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.656841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:58:32.658859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:58:32.662748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:58:32.665988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:58:32.667410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:58:32.669927 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:58:32.671267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.672922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:58:32.673522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:58:32.677830 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:58:32.678018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:58:32.683367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:58:32.683577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:58:32.688324 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:58:32.693374 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:58:32.694115 systemd-udevd[1341]: Using default interface naming scheme 'v255'. 
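The "Duplicate line" warnings above are cosmetic: two tmpfiles.d fragments declare the same path and the later one is ignored, while /boot is skipped because it sits behind an automount. Each declaration follows the tmpfiles.d Type/Path/Mode/User/Group/Age/Argument shape, for example (mode and ownership here are representative, not quoted from the conflicting fragments):

  d /var/log/journal 2755 root systemd-journal - -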
Nov 12 20:58:32.697152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.697353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:58:32.705674 augenrules[1366]: No rules Nov 12 20:58:32.705181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:58:32.710056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:58:32.714855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:58:32.716374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:58:32.718540 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:58:32.719740 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.720958 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:58:32.722773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:58:32.722964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:58:32.724651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:58:32.727011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:58:32.727607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:58:32.729779 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:58:32.730215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:58:32.732152 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:58:32.743594 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:58:32.754475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:58:32.759461 systemd[1]: Finished ensure-sysext.service. Nov 12 20:58:32.764084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.764243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:58:32.771948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:58:32.774956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:58:32.778929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:58:32.785711 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1395) Nov 12 20:58:32.792011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:58:32.793328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:58:32.796066 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:58:32.804008 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
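augenrules reporting "No rules" above just means the /etc/audit/rules.d drop-in directory compiled to an empty rule set. A hypothetical fragment, to show the shape such a rule would take (file name and rule invented for illustration):

  # /etc/audit/rules.d/10-example.rules
  -w /etc/flatcar/update.conf -p wa -k update-conf   # watch writes and attribute changes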
Nov 12 20:58:32.805494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:58:32.805531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:58:32.806391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:58:32.806590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:58:32.809825 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1395) Nov 12 20:58:32.810108 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:58:32.810317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:58:32.811948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:58:32.812170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:58:32.813843 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:58:32.814060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:58:32.823295 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:58:32.825133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:58:32.825239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:58:32.835763 systemd-resolved[1339]: Positive Trust Anchors: Nov 12 20:58:32.835787 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:58:32.835819 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:58:32.837737 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Nov 12 20:58:32.841923 systemd-resolved[1339]: Defaulting to hostname 'linux'. Nov 12 20:58:32.846437 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:58:32.851484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
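The "Positive Trust Anchors" record resolved logs is the compiled-in IANA root DS record, and the negative anchors exempt private and special-use zones from DNSSEC validation. The same positive anchor, written in the override format resolved reads from dnssec-trust-anchors.d:

  # /etc/dnssec-trust-anchors.d/root.positive
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d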
Nov 12 20:58:32.869710 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:58:32.874718 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:58:32.896289 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 20:58:32.896570 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:58:32.896588 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 20:58:32.896768 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 20:58:32.896940 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 20:58:32.907337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:58:32.915866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:58:32.917735 systemd-networkd[1411]: lo: Link UP Nov 12 20:58:32.917746 systemd-networkd[1411]: lo: Gained carrier Nov 12 20:58:32.918504 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:58:32.919877 systemd-networkd[1411]: Enumeration completed Nov 12 20:58:32.920237 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:58:32.920298 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:58:32.920303 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:58:32.921738 systemd[1]: Reached target network.target - Network. Nov 12 20:58:32.921858 systemd-networkd[1411]: eth0: Link UP Nov 12 20:58:32.921870 systemd-networkd[1411]: eth0: Gained carrier Nov 12 20:58:32.921884 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:58:32.923012 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:58:32.926450 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:58:32.935819 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:58:32.936587 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Nov 12 20:58:33.417583 systemd-resolved[1339]: Clock change detected. Flushing caches. Nov 12 20:58:33.417647 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 20:58:33.417701 systemd-timesyncd[1414]: Initial clock synchronization to Tue 2024-11-12 20:58:33.417470 UTC. Nov 12 20:58:33.417817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:58:33.432982 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:58:33.434278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:58:33.440694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:58:33.440935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:58:33.450216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:58:33.500481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
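eth0 lands on zz-default.network twice in this boot (once in the initrd, once here) because that catch-all unit matches by interface name, which is what the "potentially unpredictable interface name" warning refers to. Its shape is roughly the following (reconstructed as an assumption, not copied from this host):

  # /usr/lib/systemd/network/zz-default.network (sketch)
  [Match]
  Name=*

  [Network]
  DHCP=yes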
Nov 12 20:58:33.528444 kernel: kvm_amd: TSC scaling supported Nov 12 20:58:33.528537 kernel: kvm_amd: Nested Virtualization enabled Nov 12 20:58:33.528551 kernel: kvm_amd: Nested Paging enabled Nov 12 20:58:33.529419 kernel: kvm_amd: LBR virtualization supported Nov 12 20:58:33.529437 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 20:58:33.530415 kernel: kvm_amd: Virtual GIF supported Nov 12 20:58:33.550386 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:58:33.577353 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:58:33.590121 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:58:33.597758 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:58:33.628331 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:58:33.629827 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:58:33.630952 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:58:33.632146 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:58:33.633415 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:58:33.634863 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:58:33.636097 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:58:33.637370 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:58:33.638623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:58:33.638652 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:58:33.639576 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:58:33.641210 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:58:33.643811 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:58:33.657746 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:58:33.660338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:58:33.661913 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:58:33.663083 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:58:33.664072 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:58:33.665063 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:58:33.665091 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:58:33.666075 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:58:33.668114 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:58:33.672133 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:58:33.675032 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:58:33.676110 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 12 20:58:33.677292 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:58:33.678871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:58:33.681285 jq[1449]: false Nov 12 20:58:33.682806 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:58:33.686274 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:58:33.690397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:58:33.695431 extend-filesystems[1450]: Found loop3 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found loop4 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found loop5 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found sr0 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda1 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda2 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda3 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found usr Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda4 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda6 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda7 Nov 12 20:58:33.695431 extend-filesystems[1450]: Found vda9 Nov 12 20:58:33.695431 extend-filesystems[1450]: Checking size of /dev/vda9 Nov 12 20:58:33.737409 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 20:58:33.737436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1395) Nov 12 20:58:33.737461 extend-filesystems[1450]: Resized partition /dev/vda9 Nov 12 20:58:33.700146 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:58:33.708107 dbus-daemon[1448]: [system] SELinux support is enabled Nov 12 20:58:33.739307 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:58:33.704197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:58:33.704737 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:58:33.741952 update_engine[1464]: I20241112 20:58:33.728667 1464 main.cc:92] Flatcar Update Engine starting Nov 12 20:58:33.741952 update_engine[1464]: I20241112 20:58:33.733583 1464 update_check_scheduler.cc:74] Next update check in 3m3s Nov 12 20:58:33.705548 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:58:33.713213 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:58:33.742390 jq[1469]: true Nov 12 20:58:33.715470 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:58:33.721090 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:58:33.722032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:58:33.722371 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:58:33.722566 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:58:33.738842 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
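The extend-filesystems "Found loop3 … Found vda9" lines above are an inventory of the block devices the kernel currently exposes; the same scan can be reproduced from /sys/block, where partitions appear as subdirectories of their parent disk. A minimal sketch (device names like vda are the virtio disks seen in this log):

```python
import pathlib

def block_devices() -> list[str]:
    """List whole disks and their partitions, as in the 'Found vdaN' log lines."""
    found = []
    for disk in sorted(pathlib.Path("/sys/block").iterdir()):
        found.append(disk.name)  # e.g. vda, sr0, loop3
        # partitions are subdirectories named after the parent device
        found.extend(sorted(p.name for p in disk.iterdir()
                            if p.is_dir() and p.name.startswith(disk.name)))
    return found

if __name__ == "__main__":
    for name in block_devices():
        print("Found", name)
```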
Nov 12 20:58:33.751045 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 20:58:33.753705 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:58:33.753930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:58:33.754870 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:58:33.782549 jq[1474]: true Nov 12 20:58:33.783587 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:58:33.783587 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:58:33.783587 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 20:58:33.779257 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:58:33.788309 extend-filesystems[1450]: Resized filesystem in /dev/vda9 Nov 12 20:58:33.791547 tar[1473]: linux-amd64/helm Nov 12 20:58:33.779517 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:58:33.786207 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:58:33.786227 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:58:33.788156 systemd-logind[1457]: New seat seat0. Nov 12 20:58:33.790215 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:58:33.798177 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:58:33.802312 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:58:33.802470 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:58:33.806035 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:58:33.806157 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:58:33.816399 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:58:33.829134 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:58:33.831476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:58:33.834820 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:58:33.846552 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:58:33.856536 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:58:33.882156 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:58:33.891169 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:58:33.898458 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:58:33.898685 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:58:33.908272 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:58:33.918449 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:58:33.935344 systemd[1]: Started getty@tty1.service - Getty on tty1. 
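The resize recorded above (553472 → 1864699 blocks of 4 KiB, done online while /dev/vda9 is mounted on /) is the standard first-boot step where the root filesystem grows to fill its partition. A minimal sketch of the same size arithmetic and the resize2fs invocation; the device path is an assumption carried over from this log and the resize call itself should only be run on a disposable VM:

```python
import subprocess

BLOCK = 4096                     # block size implied by the kernel messages
OLD, NEW = 553_472, 1_864_699    # block counts from the resize log lines

def grown_bytes() -> int:
    """How much the filesystem grew per the logged block counts (~5.0 GiB)."""
    return (NEW - OLD) * BLOCK

def online_resize(device: str = "/dev/vda9") -> None:
    """ext4 can grow while mounted; resize2fs with no size argument
    expands to fill the underlying partition (requires root)."""
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    print(f"grew by {grown_bytes() / 2**30:.2f} GiB")
    # online_resize()  # deliberately left commented out
```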
Nov 12 20:58:33.938237 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:58:33.939505 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:58:33.963494 containerd[1475]: time="2024-11-12T20:58:33.963373539Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:58:33.984469 containerd[1475]: time="2024-11-12T20:58:33.984355026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986261 containerd[1475]: time="2024-11-12T20:58:33.986212863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986261 containerd[1475]: time="2024-11-12T20:58:33.986257106Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:58:33.986322 containerd[1475]: time="2024-11-12T20:58:33.986277575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:58:33.986498 containerd[1475]: time="2024-11-12T20:58:33.986473062Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:58:33.986521 containerd[1475]: time="2024-11-12T20:58:33.986497397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986598 containerd[1475]: time="2024-11-12T20:58:33.986578960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986619 containerd[1475]: time="2024-11-12T20:58:33.986597485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986826 containerd[1475]: time="2024-11-12T20:58:33.986798282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986826 containerd[1475]: time="2024-11-12T20:58:33.986818209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986875 containerd[1475]: time="2024-11-12T20:58:33.986831384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986875 containerd[1475]: time="2024-11-12T20:58:33.986841713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.986957 containerd[1475]: time="2024-11-12T20:58:33.986940980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:58:33.987220 containerd[1475]: time="2024-11-12T20:58:33.987193353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:58:33.987358 containerd[1475]: time="2024-11-12T20:58:33.987332304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:58:33.987358 containerd[1475]: time="2024-11-12T20:58:33.987349737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:58:33.987469 containerd[1475]: time="2024-11-12T20:58:33.987445957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:58:33.987528 containerd[1475]: time="2024-11-12T20:58:33.987507032Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:58:33.993671 containerd[1475]: time="2024-11-12T20:58:33.993625305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:58:33.993748 containerd[1475]: time="2024-11-12T20:58:33.993689856Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:58:33.993748 containerd[1475]: time="2024-11-12T20:58:33.993705706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:58:33.993748 containerd[1475]: time="2024-11-12T20:58:33.993735492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:58:33.993887 containerd[1475]: time="2024-11-12T20:58:33.993751151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:58:33.993887 containerd[1475]: time="2024-11-12T20:58:33.993876777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:58:33.994138 containerd[1475]: time="2024-11-12T20:58:33.994118551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:58:33.994246 containerd[1475]: time="2024-11-12T20:58:33.994228236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:58:33.994279 containerd[1475]: time="2024-11-12T20:58:33.994246280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:58:33.994279 containerd[1475]: time="2024-11-12T20:58:33.994259665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:58:33.994279 containerd[1475]: time="2024-11-12T20:58:33.994272810Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994361 containerd[1475]: time="2024-11-12T20:58:33.994285073Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994361 containerd[1475]: time="2024-11-12T20:58:33.994305972Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994361 containerd[1475]: time="2024-11-12T20:58:33.994321882Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 12 20:58:33.994361 containerd[1475]: time="2024-11-12T20:58:33.994345847Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994361 containerd[1475]: time="2024-11-12T20:58:33.994359393Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994371495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994383578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994403265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994422000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994434113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994449 containerd[1475]: time="2024-11-12T20:58:33.994446215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994460011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994473747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994486441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994499756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994512790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994527738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994544099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994557 containerd[1475]: time="2024-11-12T20:58:33.994556823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994573544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994593883Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994612838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994624129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994640931Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994695273Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994709450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:58:33.994725 containerd[1475]: time="2024-11-12T20:58:33.994720480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:58:33.995030 containerd[1475]: time="2024-11-12T20:58:33.994732813Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:58:33.995030 containerd[1475]: time="2024-11-12T20:58:33.994742642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:58:33.995030 containerd[1475]: time="2024-11-12T20:58:33.994754995Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:58:33.995030 containerd[1475]: time="2024-11-12T20:58:33.994765765Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:58:33.995030 containerd[1475]: time="2024-11-12T20:58:33.994776415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:58:33.996008 containerd[1475]: time="2024-11-12T20:58:33.995238743Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:58:33.996148 containerd[1475]: time="2024-11-12T20:58:33.996014569Z" level=info msg="Connect containerd service" Nov 12 20:58:33.996148 containerd[1475]: time="2024-11-12T20:58:33.996067739Z" level=info msg="using legacy CRI server" Nov 12 20:58:33.996148 containerd[1475]: time="2024-11-12T20:58:33.996075844Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:58:33.996206 containerd[1475]: time="2024-11-12T20:58:33.996186792Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:58:33.997015 containerd[1475]: time="2024-11-12T20:58:33.996991332Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:58:33.997224 
containerd[1475]: time="2024-11-12T20:58:33.997148016Z" level=info msg="Start subscribing containerd event" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997314929Z" level=info msg="Start recovering state" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997329998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997397404Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997417001Z" level=info msg="Start event monitor" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997438732Z" level=info msg="Start snapshots syncer" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997451085Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997459761Z" level=info msg="Start streaming server" Nov 12 20:58:33.997576 containerd[1475]: time="2024-11-12T20:58:33.997547456Z" level=info msg="containerd successfully booted in 0.036076s" Nov 12 20:58:33.997629 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:58:34.132262 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:58:34.134625 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856). Nov 12 20:58:34.154525 tar[1473]: linux-amd64/LICENSE Nov 12 20:58:34.154525 tar[1473]: linux-amd64/README.md Nov 12 20:58:34.169570 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:58:34.181869 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:34.183921 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:34.192978 systemd-logind[1457]: New session 1 of user core. Nov 12 20:58:34.194608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:58:34.211214 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:58:34.223430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:58:34.235203 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:58:34.238909 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:58:34.338455 systemd[1544]: Queued start job for default target default.target. Nov 12 20:58:34.354351 systemd[1544]: Created slice app.slice - User Application Slice. Nov 12 20:58:34.354379 systemd[1544]: Reached target paths.target - Paths. Nov 12 20:58:34.354393 systemd[1544]: Reached target timers.target - Timers. Nov 12 20:58:34.356102 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:58:34.368212 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:58:34.368383 systemd[1544]: Reached target sockets.target - Sockets. Nov 12 20:58:34.368405 systemd[1544]: Reached target basic.target - Basic System. Nov 12 20:58:34.368451 systemd[1544]: Reached target default.target - Main User Target. Nov 12 20:58:34.368487 systemd[1544]: Startup finished in 123ms. Nov 12 20:58:34.368844 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:58:34.371478 systemd[1]: Started session-1.scope - Session 1 of User core. 
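The SSH lines in this block follow sshd's fixed format ("Accepted publickey for <user> from <ip> port <port> ssh2: <keytype> <fingerprint>"), which makes session activity easy to mine from a journal dump like this one; a small parsing sketch over the exact text logged above:

```python
import re

ACCEPTED = re.compile(
    r"Accepted (?P<method>\S+) for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

# verbatim from the log above
sample = ("Accepted publickey for core from 10.0.0.1 port 38856 "
          "ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg")

if __name__ == "__main__":
    m = ACCEPTED.search(sample)
    assert m is not None
    print(m.groupdict())  # user=core, ip=10.0.0.1, port=38856, key as logged
```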
Nov 12 20:58:34.436714 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:38858.service - OpenSSH per-connection server daemon (10.0.0.1:38858). Nov 12 20:58:34.474199 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 38858 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:34.475732 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:34.479874 systemd-logind[1457]: New session 2 of user core. Nov 12 20:58:34.489120 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:58:34.543566 sshd[1555]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:34.550773 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:38858.service: Deactivated successfully. Nov 12 20:58:34.552461 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:58:34.553949 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:58:34.555197 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:38870.service - OpenSSH per-connection server daemon (10.0.0.1:38870). Nov 12 20:58:34.557333 systemd-logind[1457]: Removed session 2. Nov 12 20:58:34.591557 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 38870 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:34.593072 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:34.597041 systemd-logind[1457]: New session 3 of user core. Nov 12 20:58:34.607096 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:58:34.661492 sshd[1562]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:34.665410 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:38870.service: Deactivated successfully. Nov 12 20:58:34.667207 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:58:34.667825 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:58:34.668660 systemd-logind[1457]: Removed session 3. Nov 12 20:58:34.882177 systemd-networkd[1411]: eth0: Gained IPv6LL Nov 12 20:58:34.885383 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:58:34.887190 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:58:34.900334 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:58:34.903150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:58:34.905804 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:58:34.928349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:58:34.930168 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:58:34.930396 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:58:34.932766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:58:35.503481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:58:35.505224 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:58:35.506519 systemd[1]: Startup finished in 723ms (kernel) + 5.322s (initrd) + 4.042s (userspace) = 10.088s. 
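The closing "Startup finished" line is the sum of the three boot phases, but each component is rounded to the millisecond before printing, so naively re-adding them lands 1 ms short of the logged total; a quick check:

```python
kernel, initrd, userspace = 0.723, 5.322, 4.042  # seconds, as printed above

total = kernel + initrd + userspace
print(f"{total:.3f}s")
# prints 10.087s vs the logged 10.088s: systemd sums exact monotonic-clock
# deltas, while the printed per-phase values are independently rounded
```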
Nov 12 20:58:35.508719 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:58:35.972914 kubelet[1590]: E1112 20:58:35.972736 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:58:35.977134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:58:35.977341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:58:44.672420 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:60592.service - OpenSSH per-connection server daemon (10.0.0.1:60592). Nov 12 20:58:44.707985 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 60592 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:44.709416 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:44.713248 systemd-logind[1457]: New session 4 of user core. Nov 12 20:58:44.723095 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:58:44.778447 sshd[1604]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:44.785368 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:60592.service: Deactivated successfully. Nov 12 20:58:44.787060 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:58:44.788536 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:58:44.808415 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:60604.service - OpenSSH per-connection server daemon (10.0.0.1:60604). Nov 12 20:58:44.809436 systemd-logind[1457]: Removed session 4. Nov 12 20:58:44.838988 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 60604 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:44.840305 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:44.844090 systemd-logind[1457]: New session 5 of user core. Nov 12 20:58:44.854114 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:58:44.902791 sshd[1611]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:44.913738 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:60604.service: Deactivated successfully. Nov 12 20:58:44.915251 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:58:44.916759 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:58:44.923218 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:60612.service - OpenSSH per-connection server daemon (10.0.0.1:60612). Nov 12 20:58:44.924130 systemd-logind[1457]: Removed session 5. Nov 12 20:58:44.957134 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 60612 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:44.958764 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:44.962550 systemd-logind[1457]: New session 6 of user core. Nov 12 20:58:44.976110 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:58:45.030075 sshd[1618]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:45.046755 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:60612.service: Deactivated successfully. 
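The kubelet crash above is the expected state before a cluster is bootstrapped: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits, after which systemd's restart logic re-launches it (the "Scheduled restart job" lines later in this log). In a kubeadm-based setup that file is generated by kubeadm init/join; purely to illustrate the file's shape, a minimal hand-written KubeletConfiguration, where the field values are assumptions rather than anything taken from this host:

```python
import pathlib

# Smallest plausible config; kubeadm normally writes a much fuller version
# of this file during cluster bootstrap.
MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # matches SystemdCgroup:true in the containerd CRI config above
"""

def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL)

if __name__ == "__main__":
    write_config("./config.yaml")  # write to CWD for inspection, not the live path
```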
Nov 12 20:58:45.048462 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:58:45.049820 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:58:45.051042 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:60622.service - OpenSSH per-connection server daemon (10.0.0.1:60622). Nov 12 20:58:45.051838 systemd-logind[1457]: Removed session 6. Nov 12 20:58:45.086168 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 60622 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:45.087817 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:45.091925 systemd-logind[1457]: New session 7 of user core. Nov 12 20:58:45.109228 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:58:45.167417 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:58:45.167752 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:58:45.183734 sudo[1628]: pam_unix(sudo:session): session closed for user root Nov 12 20:58:45.185659 sshd[1625]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:45.200737 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:60622.service: Deactivated successfully. Nov 12 20:58:45.203402 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:58:45.205269 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:58:45.207203 systemd[1]: Started sshd@7-10.0.0.160:22-10.0.0.1:60626.service - OpenSSH per-connection server daemon (10.0.0.1:60626). Nov 12 20:58:45.208092 systemd-logind[1457]: Removed session 7. Nov 12 20:58:45.242792 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 60626 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:45.244304 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:45.248650 systemd-logind[1457]: New session 8 of user core. Nov 12 20:58:45.263135 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:58:45.317469 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:58:45.317814 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:58:45.321929 sudo[1637]: pam_unix(sudo:session): session closed for user root Nov 12 20:58:45.329521 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:58:45.329952 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:58:45.348276 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:58:45.349806 auditctl[1640]: No rules Nov 12 20:58:45.351141 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:58:45.351407 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:58:45.353270 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:58:45.384172 augenrules[1658]: No rules Nov 12 20:58:45.386035 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:58:45.387584 sudo[1636]: pam_unix(sudo:session): session closed for user root Nov 12 20:58:45.389387 sshd[1633]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:45.399947 systemd[1]: sshd@7-10.0.0.160:22-10.0.0.1:60626.service: Deactivated successfully. 
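The audit-rules sequence above (sudo removes the rules files, audit-rules restarts, and both auditctl and augenrules report "No rules") can be verified with the same tooling; a minimal sketch, assuming auditctl is installed and the script runs as root:

```python
import subprocess

def list_audit_rules() -> str:
    """auditctl -l prints the kernel's loaded audit rules, or 'No rules'."""
    out = subprocess.run(["auditctl", "-l"], capture_output=True,
                         text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print(list_audit_rules())  # on the host above this would print: No rules
```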
Nov 12 20:58:45.401724 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:58:45.403387 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:58:45.413233 systemd[1]: Started sshd@8-10.0.0.160:22-10.0.0.1:60634.service - OpenSSH per-connection server daemon (10.0.0.1:60634). Nov 12 20:58:45.414424 systemd-logind[1457]: Removed session 8. Nov 12 20:58:45.444383 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 60634 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:58:45.445832 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:45.449705 systemd-logind[1457]: New session 9 of user core. Nov 12 20:58:45.459081 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:58:45.511262 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:58:45.511607 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:58:45.798319 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:58:45.798395 (dockerd)[1688]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:58:45.985476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:58:46.002167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:58:46.080454 dockerd[1688]: time="2024-11-12T20:58:46.080331259Z" level=info msg="Starting up" Nov 12 20:58:46.142256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:58:46.146699 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:58:46.589471 kubelet[1720]: E1112 20:58:46.589332 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:58:46.597474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:58:46.597703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:58:46.649660 dockerd[1688]: time="2024-11-12T20:58:46.649596408Z" level=info msg="Loading containers: start." Nov 12 20:58:46.758007 kernel: Initializing XFRM netlink socket Nov 12 20:58:46.836872 systemd-networkd[1411]: docker0: Link UP Nov 12 20:58:46.857558 dockerd[1688]: time="2024-11-12T20:58:46.857463179Z" level=info msg="Loading containers: done." Nov 12 20:58:46.871445 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3314065826-merged.mount: Deactivated successfully. 
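dockerd is initializing here; once it logs "API listen on /run/docker.sock" (just below), the daemon answers plain HTTP over that unix socket. A minimal liveness probe using only the standard library, assuming the socket path from the log and read/write access to it:

```python
import socket

def docker_ping(path: str = "/run/docker.sock") -> str:
    """GET /_ping over the daemon's unix socket; a healthy dockerd replies 'OK'."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    return reply.decode()

if __name__ == "__main__":
    print(docker_ping())
```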
Nov 12 20:58:46.873114 dockerd[1688]: time="2024-11-12T20:58:46.873071212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:58:46.873204 dockerd[1688]: time="2024-11-12T20:58:46.873180227Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:58:46.873334 dockerd[1688]: time="2024-11-12T20:58:46.873308077Z" level=info msg="Daemon has completed initialization" Nov 12 20:58:46.910510 dockerd[1688]: time="2024-11-12T20:58:46.910434735Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:58:46.910671 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:58:47.529434 containerd[1475]: time="2024-11-12T20:58:47.529389340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:58:48.172789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975202808.mount: Deactivated successfully. Nov 12 20:58:49.658941 containerd[1475]: time="2024-11-12T20:58:49.658876872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:49.659653 containerd[1475]: time="2024-11-12T20:58:49.659607593Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:58:49.660629 containerd[1475]: time="2024-11-12T20:58:49.660593684Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:49.663151 containerd[1475]: time="2024-11-12T20:58:49.663119354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:49.664216 containerd[1475]: time="2024-11-12T20:58:49.664176117Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.134742253s" Nov 12 20:58:49.664253 containerd[1475]: time="2024-11-12T20:58:49.664217314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:58:49.686027 containerd[1475]: time="2024-11-12T20:58:49.685993644Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:58:51.668337 containerd[1475]: time="2024-11-12T20:58:51.668269631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:51.669070 containerd[1475]: time="2024-11-12T20:58:51.669021111Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:58:51.670201 containerd[1475]: time="2024-11-12T20:58:51.670160419Z" level=info msg="ImageCreate event 
name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:51.674674 containerd[1475]: time="2024-11-12T20:58:51.674638644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:51.675522 containerd[1475]: time="2024-11-12T20:58:51.675490573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 1.989466241s" Nov 12 20:58:51.675568 containerd[1475]: time="2024-11-12T20:58:51.675524767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:58:51.698158 containerd[1475]: time="2024-11-12T20:58:51.698110806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:58:53.015465 containerd[1475]: time="2024-11-12T20:58:53.015395888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:53.059555 containerd[1475]: time="2024-11-12T20:58:53.059471017Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:58:53.092150 containerd[1475]: time="2024-11-12T20:58:53.092090357Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:53.111400 containerd[1475]: time="2024-11-12T20:58:53.111345224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:53.112543 containerd[1475]: time="2024-11-12T20:58:53.112485504Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.4143335s" Nov 12 20:58:53.112543 containerd[1475]: time="2024-11-12T20:58:53.112526201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:58:53.134010 containerd[1475]: time="2024-11-12T20:58:53.133956350Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:58:54.830417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884164276.mount: Deactivated successfully. 
Nov 12 20:58:56.084351 containerd[1475]: time="2024-11-12T20:58:56.084281488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:56.085054 containerd[1475]: time="2024-11-12T20:58:56.085008933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:58:56.086537 containerd[1475]: time="2024-11-12T20:58:56.086505211Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:56.088604 containerd[1475]: time="2024-11-12T20:58:56.088578061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:56.089162 containerd[1475]: time="2024-11-12T20:58:56.089130568Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.955122391s" Nov 12 20:58:56.089226 containerd[1475]: time="2024-11-12T20:58:56.089164061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:58:56.111367 containerd[1475]: time="2024-11-12T20:58:56.111323901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:58:56.735543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:58:56.750118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:58:56.890303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:58:56.894498 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:58:57.025204 kubelet[1966]: E1112 20:58:57.025053 1966 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:58:57.030042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:58:57.030255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:58:57.485906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583009238.mount: Deactivated successfully. 
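The "Scheduled restart job, restart counter is at 2" line above is systemd's Restart= logic re-launching the kubelet after each config failure; the live counter for a unit can be queried the same way. A minimal sketch, assuming a systemd host with a kubelet.service unit:

```python
import subprocess

def restart_count(unit: str = "kubelet.service") -> int:
    """Read systemd's NRestarts property for a unit (resets on systemctl reset-failed)."""
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=NRestarts", "--value"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

if __name__ == "__main__":
    print(restart_count())
```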
Nov 12 20:58:58.168433 containerd[1475]: time="2024-11-12T20:58:58.168375836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.169133 containerd[1475]: time="2024-11-12T20:58:58.169086079Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:58:58.170270 containerd[1475]: time="2024-11-12T20:58:58.170214066Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.173107 containerd[1475]: time="2024-11-12T20:58:58.173064836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.174262 containerd[1475]: time="2024-11-12T20:58:58.174221316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.062859785s" Nov 12 20:58:58.174262 containerd[1475]: time="2024-11-12T20:58:58.174259328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:58:58.195416 containerd[1475]: time="2024-11-12T20:58:58.195371962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:58:58.697869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076983809.mount: Deactivated successfully. 
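Every "Pulled image" line above carries both the mutable tag and the immutable repo digest; extracting the digest is useful for pinning exactly what was pulled. A small regex sketch over the coredns line from this log (quotes unescaped here for readability):

```python
import re

line = ('Pulled image "registry.k8s.io/coredns/coredns:v1.11.1" with image id '
        '"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4", '
        'repo digest "registry.k8s.io/coredns/coredns@sha256:'
        '1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"')

digest = re.search(r'repo digest "([^"]+)"', line).group(1)
print(digest)  # reference this in a pod spec instead of the :v1.11.1 tag
```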
Nov 12 20:58:58.703393 containerd[1475]: time="2024-11-12T20:58:58.703346010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.704008 containerd[1475]: time="2024-11-12T20:58:58.703940736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:58:58.705059 containerd[1475]: time="2024-11-12T20:58:58.705022847Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.707190 containerd[1475]: time="2024-11-12T20:58:58.707155900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:58:58.708084 containerd[1475]: time="2024-11-12T20:58:58.708047754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 512.629987ms" Nov 12 20:58:58.708124 containerd[1475]: time="2024-11-12T20:58:58.708085786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:58:58.729863 containerd[1475]: time="2024-11-12T20:58:58.729804767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:58:59.210319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980267520.mount: Deactivated successfully. Nov 12 20:59:01.504577 containerd[1475]: time="2024-11-12T20:59:01.504516025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:01.505698 containerd[1475]: time="2024-11-12T20:59:01.505625798Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:59:01.506730 containerd[1475]: time="2024-11-12T20:59:01.506701567Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:01.509513 containerd[1475]: time="2024-11-12T20:59:01.509479771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:01.510675 containerd[1475]: time="2024-11-12T20:59:01.510638666Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.780795797s" Nov 12 20:59:01.510675 containerd[1475]: time="2024-11-12T20:59:01.510671307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:59:04.077955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:59:04.088190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:59:04.104287 systemd[1]: Reloading requested from client PID 2162 ('systemctl') (unit session-9.scope)... Nov 12 20:59:04.104300 systemd[1]: Reloading... Nov 12 20:59:04.174751 zram_generator::config[2204]: No configuration found. Nov 12 20:59:04.492464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:59:04.567594 systemd[1]: Reloading finished in 462 ms. Nov 12 20:59:04.613876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:59:04.616871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:59:04.618959 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:59:04.619263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:59:04.621032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:59:04.758789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:59:04.763434 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:59:04.798726 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:59:04.798726 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:59:04.798726 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:59:04.799692 kubelet[2251]: I1112 20:59:04.799647 2251 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:59:05.058290 kubelet[2251]: I1112 20:59:05.058185 2251 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:59:05.058290 kubelet[2251]: I1112 20:59:05.058212 2251 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:59:05.058436 kubelet[2251]: I1112 20:59:05.058422 2251 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:59:05.072649 kubelet[2251]: E1112 20:59:05.072625 2251 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.075306 kubelet[2251]: I1112 20:59:05.075279 2251 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:59:05.087150 kubelet[2251]: I1112 20:59:05.087109 2251 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:59:05.088176 kubelet[2251]: I1112 20:59:05.088145 2251 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:59:05.088340 kubelet[2251]: I1112 20:59:05.088312 2251 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:59:05.088446 kubelet[2251]: I1112 20:59:05.088344 2251 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:59:05.088446 kubelet[2251]: I1112 20:59:05.088356 2251 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:59:05.088507 kubelet[2251]: I1112 20:59:05.088486 2251 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:59:05.088625 kubelet[2251]: I1112 20:59:05.088600 2251 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:59:05.088625 kubelet[2251]: I1112 20:59:05.088618 2251 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:59:05.088693 kubelet[2251]: I1112 20:59:05.088648 2251 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:59:05.088693 kubelet[2251]: I1112 20:59:05.088667 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:59:05.089649 kubelet[2251]: I1112 20:59:05.089619 2251 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:59:05.090081 kubelet[2251]: W1112 20:59:05.090042 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.090156 kubelet[2251]: E1112 20:59:05.090090 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.090457 kubelet[2251]: W1112 20:59:05.090422 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.090506 kubelet[2251]: E1112 20:59:05.090463 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.092296 kubelet[2251]: I1112 20:59:05.092276 2251 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:59:05.092345 kubelet[2251]: W1112 20:59:05.092326 2251 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:59:05.093013 kubelet[2251]: I1112 20:59:05.092813 2251 server.go:1256] "Started kubelet" Nov 12 20:59:05.093013 kubelet[2251]: I1112 20:59:05.092852 2251 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:59:05.094290 kubelet[2251]: I1112 20:59:05.093537 2251 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:59:05.094290 kubelet[2251]: I1112 20:59:05.093779 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:59:05.094290 kubelet[2251]: I1112 20:59:05.094036 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:59:05.094290 kubelet[2251]: I1112 20:59:05.094122 2251 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:59:05.094964 kubelet[2251]: I1112 20:59:05.094440 2251 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:59:05.094964 kubelet[2251]: I1112 20:59:05.094497 2251 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:59:05.094964 kubelet[2251]: I1112 20:59:05.094541 2251 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:59:05.094964 kubelet[2251]: W1112 20:59:05.094729 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.094964 kubelet[2251]: E1112 20:59:05.094756 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.098141 kubelet[2251]: E1112 20:59:05.098126 2251 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807542f9554b247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:59:05.092796999 +0000 UTC m=+0.325409785,LastTimestamp:2024-11-12 20:59:05.092796999 +0000 UTC m=+0.325409785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:59:05.098453 kubelet[2251]: I1112 20:59:05.098433 2251 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:59:05.098553 kubelet[2251]: I1112 20:59:05.098533 2251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:59:05.098807 kubelet[2251]: E1112 20:59:05.098785 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="200ms" Nov 12 20:59:05.099277 kubelet[2251]: E1112 20:59:05.099264 2251 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:59:05.099557 kubelet[2251]: I1112 20:59:05.099541 2251 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:59:05.110627 kubelet[2251]: I1112 20:59:05.110512 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:59:05.112084 kubelet[2251]: I1112 20:59:05.112027 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:59:05.112084 kubelet[2251]: I1112 20:59:05.112056 2251 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:59:05.112140 kubelet[2251]: I1112 20:59:05.112091 2251 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:59:05.112172 kubelet[2251]: E1112 20:59:05.112141 2251 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:59:05.112918 kubelet[2251]: W1112 20:59:05.112880 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.112961 kubelet[2251]: E1112 20:59:05.112922 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:05.116153 kubelet[2251]: I1112 20:59:05.115828 2251 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:59:05.116153 kubelet[2251]: I1112 20:59:05.115859 2251 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:59:05.116153 kubelet[2251]: I1112 20:59:05.115876 2251 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:59:05.195460 kubelet[2251]: I1112 20:59:05.195443 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:05.195760 kubelet[2251]: E1112 20:59:05.195741 2251 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Nov 12 20:59:05.213022 kubelet[2251]: E1112 20:59:05.212990 2251 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:59:05.299657 
kubelet[2251]: E1112 20:59:05.299632 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="400ms" Nov 12 20:59:05.396637 kubelet[2251]: I1112 20:59:05.396578 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:05.396774 kubelet[2251]: E1112 20:59:05.396758 2251 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Nov 12 20:59:05.413907 kubelet[2251]: E1112 20:59:05.413876 2251 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:59:05.700355 kubelet[2251]: E1112 20:59:05.700283 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="800ms" Nov 12 20:59:05.798406 kubelet[2251]: I1112 20:59:05.798391 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:05.798623 kubelet[2251]: E1112 20:59:05.798593 2251 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Nov 12 20:59:05.814754 kubelet[2251]: E1112 20:59:05.814729 2251 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:59:06.076419 kubelet[2251]: I1112 20:59:06.076394 2251 policy_none.go:49] "None policy: Start" Nov 12 20:59:06.076956 kubelet[2251]: I1112 20:59:06.076934 2251 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:59:06.076956 kubelet[2251]: I1112 20:59:06.076958 2251 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:59:06.139917 kubelet[2251]: W1112 20:59:06.139895 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.139917 kubelet[2251]: E1112 20:59:06.139921 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.167536 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:59:06.179595 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:59:06.182416 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
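
While the API server at 10.0.0.160:6443 refuses connections, the lease controller's "will retry" interval doubles on each consecutive failure: 200ms, 400ms, and 800ms above, then 1.6s and 3.2s below. A small sketch of that doubling, assuming a 7s ceiling for illustration (the observed retries stop before any cap is reached):

    package main

    import (
        "fmt"
        "time"
    )

    // retryInterval mimics the progression visible in the log:
    // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s. The 7s ceiling is an
    // assumption for illustration; the log never runs long enough to show one.
    func retryInterval(consecutiveFailures int) time.Duration {
        d := 200 * time.Millisecond
        for i := 0; i < consecutiveFailures; i++ {
            d *= 2
            if d > 7*time.Second {
                return 7 * time.Second
            }
        }
        return d
    }

    func main() {
        for i := 0; i < 5; i++ {
            fmt.Println(retryInterval(i)) // 200ms 400ms 800ms 1.6s 3.2s
        }
    }
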
Nov 12 20:59:06.196831 kubelet[2251]: I1112 20:59:06.196789 2251 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:59:06.197212 kubelet[2251]: I1112 20:59:06.197121 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:59:06.198154 kubelet[2251]: E1112 20:59:06.198104 2251 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:59:06.501085 kubelet[2251]: E1112 20:59:06.501038 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="1.6s" Nov 12 20:59:06.529292 kubelet[2251]: W1112 20:59:06.529232 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.529328 kubelet[2251]: E1112 20:59:06.529301 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.600268 kubelet[2251]: I1112 20:59:06.600239 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:06.600670 kubelet[2251]: E1112 20:59:06.600645 2251 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Nov 12 20:59:06.615892 kubelet[2251]: I1112 20:59:06.615860 2251 topology_manager.go:215] "Topology Admit Handler" podUID="3800f700f6ac7257a1b21f6f4ac9ec7f" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:59:06.616618 kubelet[2251]: I1112 20:59:06.616576 2251 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:59:06.617188 kubelet[2251]: I1112 20:59:06.617165 2251 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:59:06.623572 systemd[1]: Created slice kubepods-burstable-pod3800f700f6ac7257a1b21f6f4ac9ec7f.slice - libcontainer container kubepods-burstable-pod3800f700f6ac7257a1b21f6f4ac9ec7f.slice. 
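
The kubepods-*.slice names systemd creates here are derived mechanically from the pod's QoS class and UID under the systemd cgroup driver configured in the nodeConfig above, with dashes in the UID escaped to underscores (compare kubepods-besteffort-pod5d07da9c_8e64_4a6d_b1c7_71c834c64521.slice near the end of the log). A sketch that reproduces both observed forms:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the naming visible in the log: the QoS class is
    // folded into the slice name, and dashes in the pod UID become
    // underscores because "-" expresses slice hierarchy in systemd unit names.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // "kubepods-burstable-pod3800f700f6ac7257a1b21f6f4ac9ec7f.slice"
        fmt.Println(podSlice("burstable", "3800f700f6ac7257a1b21f6f4ac9ec7f"))
        // "kubepods-besteffort-pod5d07da9c_8e64_4a6d_b1c7_71c834c64521.slice"
        fmt.Println(podSlice("besteffort", "5d07da9c-8e64-4a6d-b1c7-71c834c64521"))
    }
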
Nov 12 20:59:06.633934 kubelet[2251]: W1112 20:59:06.633886 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.633934 kubelet[2251]: E1112 20:59:06.633934 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.644701 kubelet[2251]: W1112 20:59:06.644666 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.644748 kubelet[2251]: E1112 20:59:06.644707 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:06.645774 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice. Nov 12 20:59:06.648927 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice. Nov 12 20:59:06.701665 kubelet[2251]: I1112 20:59:06.701648 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:06.701743 kubelet[2251]: I1112 20:59:06.701675 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:06.701743 kubelet[2251]: I1112 20:59:06.701711 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:06.701743 kubelet[2251]: I1112 20:59:06.701733 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:06.701856 kubelet[2251]: I1112 20:59:06.701751 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:06.701856 kubelet[2251]: I1112 20:59:06.701771 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:06.701856 kubelet[2251]: I1112 20:59:06.701790 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:06.701856 kubelet[2251]: I1112 20:59:06.701830 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:06.701995 kubelet[2251]: I1112 20:59:06.701882 2251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:59:06.944774 kubelet[2251]: E1112 20:59:06.944652 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:06.945656 containerd[1475]: time="2024-11-12T20:59:06.945613226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3800f700f6ac7257a1b21f6f4ac9ec7f,Namespace:kube-system,Attempt:0,}" Nov 12 20:59:06.947680 kubelet[2251]: E1112 20:59:06.947660 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:06.948148 containerd[1475]: time="2024-11-12T20:59:06.948101968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 20:59:06.951265 kubelet[2251]: E1112 20:59:06.951242 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:06.951558 containerd[1475]: time="2024-11-12T20:59:06.951530526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 20:59:07.217594 kubelet[2251]: E1112 20:59:07.217486 2251 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: 
connect: connection refused Nov 12 20:59:07.768342 kubelet[2251]: E1112 20:59:07.768307 2251 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807542f9554b247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:59:05.092796999 +0000 UTC m=+0.325409785,LastTimestamp:2024-11-12 20:59:05.092796999 +0000 UTC m=+0.325409785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:59:08.102329 kubelet[2251]: E1112 20:59:08.102220 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="3.2s" Nov 12 20:59:08.201732 kubelet[2251]: I1112 20:59:08.201701 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:08.202052 kubelet[2251]: E1112 20:59:08.202034 2251 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Nov 12 20:59:08.579883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510246558.mount: Deactivated successfully. Nov 12 20:59:08.587107 containerd[1475]: time="2024-11-12T20:59:08.587054043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:59:08.588007 containerd[1475]: time="2024-11-12T20:59:08.587954529Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:59:08.588856 containerd[1475]: time="2024-11-12T20:59:08.588827692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:59:08.589672 containerd[1475]: time="2024-11-12T20:59:08.589626603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:59:08.590492 containerd[1475]: time="2024-11-12T20:59:08.590432727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:59:08.591279 containerd[1475]: time="2024-11-12T20:59:08.591241196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:59:08.592228 containerd[1475]: time="2024-11-12T20:59:08.592196576Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:59:08.594941 containerd[1475]: time="2024-11-12T20:59:08.594912601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.649211847s" Nov 12 20:59:08.596173 containerd[1475]: time="2024-11-12T20:59:08.596143409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:59:08.598518 containerd[1475]: time="2024-11-12T20:59:08.598474495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.650285972s" Nov 12 20:59:08.601305 containerd[1475]: time="2024-11-12T20:59:08.601275752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.649696573s" Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747346572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747409894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747424121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747497812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747273242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747343286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747357413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.747568 containerd[1475]: time="2024-11-12T20:59:08.747443598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.753119 containerd[1475]: time="2024-11-12T20:59:08.752885876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:08.753119 containerd[1475]: time="2024-11-12T20:59:08.752937144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:08.753119 containerd[1475]: time="2024-11-12T20:59:08.752948936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.753119 containerd[1475]: time="2024-11-12T20:59:08.753030643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:08.773121 systemd[1]: Started cri-containerd-30ca67d63c566480f4ea0f0521ac94f4a5b2e773cdfe65bf92f94d5e8c48bd56.scope - libcontainer container 30ca67d63c566480f4ea0f0521ac94f4a5b2e773cdfe65bf92f94d5e8c48bd56. Nov 12 20:59:08.776899 systemd[1]: Started cri-containerd-b59c41a4d8ed653d9dc22a606a976b07996c472a611cdc61833e43055ab4424a.scope - libcontainer container b59c41a4d8ed653d9dc22a606a976b07996c472a611cdc61833e43055ab4424a. Nov 12 20:59:08.778676 systemd[1]: Started cri-containerd-d6be054f41cdb565c64a0c3aa4c4e3ab0c7a4d63fffc6a7943d9058af1b9ea4d.scope - libcontainer container d6be054f41cdb565c64a0c3aa4c4e3ab0c7a4d63fffc6a7943d9058af1b9ea4d. Nov 12 20:59:08.812140 containerd[1475]: time="2024-11-12T20:59:08.812059993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3800f700f6ac7257a1b21f6f4ac9ec7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"30ca67d63c566480f4ea0f0521ac94f4a5b2e773cdfe65bf92f94d5e8c48bd56\"" Nov 12 20:59:08.813663 kubelet[2251]: E1112 20:59:08.813632 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:08.816828 containerd[1475]: time="2024-11-12T20:59:08.816792519Z" level=info msg="CreateContainer within sandbox \"30ca67d63c566480f4ea0f0521ac94f4a5b2e773cdfe65bf92f94d5e8c48bd56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:59:08.817984 containerd[1475]: time="2024-11-12T20:59:08.817930941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59c41a4d8ed653d9dc22a606a976b07996c472a611cdc61833e43055ab4424a\"" Nov 12 20:59:08.818793 kubelet[2251]: E1112 20:59:08.818764 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:08.820212 containerd[1475]: time="2024-11-12T20:59:08.820084818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6be054f41cdb565c64a0c3aa4c4e3ab0c7a4d63fffc6a7943d9058af1b9ea4d\"" Nov 12 20:59:08.820443 containerd[1475]: time="2024-11-12T20:59:08.820421042Z" level=info msg="CreateContainer within sandbox \"b59c41a4d8ed653d9dc22a606a976b07996c472a611cdc61833e43055ab4424a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:59:08.821205 kubelet[2251]: E1112 20:59:08.821156 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:08.823757 containerd[1475]: time="2024-11-12T20:59:08.823729601Z" level=info msg="CreateContainer within sandbox \"d6be054f41cdb565c64a0c3aa4c4e3ab0c7a4d63fffc6a7943d9058af1b9ea4d\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:59:08.844242 containerd[1475]: time="2024-11-12T20:59:08.844156970Z" level=info msg="CreateContainer within sandbox \"b59c41a4d8ed653d9dc22a606a976b07996c472a611cdc61833e43055ab4424a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"28b0d3f62c0e54ee34ee6444f31da6d02f9afe40e61c6d8e902017c2a8060567\"" Nov 12 20:59:08.844584 containerd[1475]: time="2024-11-12T20:59:08.844562106Z" level=info msg="StartContainer for \"28b0d3f62c0e54ee34ee6444f31da6d02f9afe40e61c6d8e902017c2a8060567\"" Nov 12 20:59:08.849408 containerd[1475]: time="2024-11-12T20:59:08.849376270Z" level=info msg="CreateContainer within sandbox \"30ca67d63c566480f4ea0f0521ac94f4a5b2e773cdfe65bf92f94d5e8c48bd56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9be764c309e77bc4814214f5126ecdcf22a37489f59a097e592f969f3df535e7\"" Nov 12 20:59:08.849807 containerd[1475]: time="2024-11-12T20:59:08.849774763Z" level=info msg="StartContainer for \"9be764c309e77bc4814214f5126ecdcf22a37489f59a097e592f969f3df535e7\"" Nov 12 20:59:08.854421 containerd[1475]: time="2024-11-12T20:59:08.854381350Z" level=info msg="CreateContainer within sandbox \"d6be054f41cdb565c64a0c3aa4c4e3ab0c7a4d63fffc6a7943d9058af1b9ea4d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"283922fdd2096f5c5b0bbb08bb30e1a2658ca3516d038fc7cd271ad031eb1c7e\"" Nov 12 20:59:08.854987 containerd[1475]: time="2024-11-12T20:59:08.854949859Z" level=info msg="StartContainer for \"283922fdd2096f5c5b0bbb08bb30e1a2658ca3516d038fc7cd271ad031eb1c7e\"" Nov 12 20:59:08.873099 systemd[1]: Started cri-containerd-28b0d3f62c0e54ee34ee6444f31da6d02f9afe40e61c6d8e902017c2a8060567.scope - libcontainer container 28b0d3f62c0e54ee34ee6444f31da6d02f9afe40e61c6d8e902017c2a8060567. Nov 12 20:59:08.877131 systemd[1]: Started cri-containerd-9be764c309e77bc4814214f5126ecdcf22a37489f59a097e592f969f3df535e7.scope - libcontainer container 9be764c309e77bc4814214f5126ecdcf22a37489f59a097e592f969f3df535e7. Nov 12 20:59:08.894106 systemd[1]: Started cri-containerd-283922fdd2096f5c5b0bbb08bb30e1a2658ca3516d038fc7cd271ad031eb1c7e.scope - libcontainer container 283922fdd2096f5c5b0bbb08bb30e1a2658ca3516d038fc7cd271ad031eb1c7e. 
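
The sequence above is the standard CRI flow, one triple per static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, StartContainer runs it, and systemd tracks each as a cri-containerd-<id>.scope. A bare-bones sketch of the same sequence against the CRI gRPC API; the endpoint, the stripped-down configs, and the kube-apiserver image reference are assumptions here (kubelet sends far richer configs built from the manifests under /etc/kubernetes/manifests).

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed CRI endpoint; the log only shows the containerd CRI
        // plugin serving kubelet, not the socket path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox -> the "30ca67d6..."-style sandbox id in the log.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-apiserver-localhost",
                    Namespace: "kube-system",
                    Uid:       "3800f700f6ac7257a1b21f6f4ac9ec7f",
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within the sandbox; the image reference is a
        // guess, since the log never names the kube-apiserver image.
        ct, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.29.2"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer -> "StartContainer ... returns successfully".
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ct.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox:", sb.PodSandboxId, "container:", ct.ContainerId)
    }
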
Nov 12 20:59:08.927960 kubelet[2251]: W1112 20:59:08.927929 2251 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:08.927960 kubelet[2251]: E1112 20:59:08.927961 2251 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Nov 12 20:59:08.932946 containerd[1475]: time="2024-11-12T20:59:08.932770963Z" level=info msg="StartContainer for \"9be764c309e77bc4814214f5126ecdcf22a37489f59a097e592f969f3df535e7\" returns successfully" Nov 12 20:59:08.932946 containerd[1475]: time="2024-11-12T20:59:08.932898838Z" level=info msg="StartContainer for \"28b0d3f62c0e54ee34ee6444f31da6d02f9afe40e61c6d8e902017c2a8060567\" returns successfully" Nov 12 20:59:08.941221 containerd[1475]: time="2024-11-12T20:59:08.941178672Z" level=info msg="StartContainer for \"283922fdd2096f5c5b0bbb08bb30e1a2658ca3516d038fc7cd271ad031eb1c7e\" returns successfully" Nov 12 20:59:09.133296 kubelet[2251]: E1112 20:59:09.133187 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:09.133989 kubelet[2251]: E1112 20:59:09.133710 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:09.135473 kubelet[2251]: E1112 20:59:09.135442 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:10.048594 kubelet[2251]: E1112 20:59:10.048531 2251 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:59:10.092821 kubelet[2251]: I1112 20:59:10.092772 2251 apiserver.go:52] "Watching apiserver" Nov 12 20:59:10.094993 kubelet[2251]: I1112 20:59:10.094903 2251 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:59:10.134786 kubelet[2251]: E1112 20:59:10.134743 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:10.415537 kubelet[2251]: E1112 20:59:10.415428 2251 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:59:10.842948 kubelet[2251]: E1112 20:59:10.842915 2251 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:59:11.322944 kubelet[2251]: E1112 20:59:11.322904 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:59:11.403357 kubelet[2251]: I1112 20:59:11.403327 2251 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:11.411094 kubelet[2251]: I1112 20:59:11.411048 2251 kubelet_node_status.go:76] 
"Successfully registered node" node="localhost" Nov 12 20:59:12.688238 systemd[1]: Reloading requested from client PID 2531 ('systemctl') (unit session-9.scope)... Nov 12 20:59:12.688255 systemd[1]: Reloading... Nov 12 20:59:12.755949 kubelet[2251]: E1112 20:59:12.755897 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:12.763996 zram_generator::config[2573]: No configuration found. Nov 12 20:59:12.890375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:59:12.978948 systemd[1]: Reloading finished in 290 ms. Nov 12 20:59:13.023417 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:59:13.032561 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:59:13.032872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:59:13.040219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:59:13.179459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:59:13.184588 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:59:13.233865 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:59:13.233865 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:59:13.233865 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:59:13.233865 kubelet[2615]: I1112 20:59:13.233799 2615 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:59:13.239881 kubelet[2615]: I1112 20:59:13.238949 2615 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:59:13.239881 kubelet[2615]: I1112 20:59:13.238981 2615 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:59:13.239881 kubelet[2615]: I1112 20:59:13.239457 2615 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:59:13.241682 kubelet[2615]: I1112 20:59:13.241660 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:59:13.243434 kubelet[2615]: I1112 20:59:13.243404 2615 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:59:13.253953 kubelet[2615]: I1112 20:59:13.253920 2615 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:59:13.254220 kubelet[2615]: I1112 20:59:13.254195 2615 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:59:13.254406 kubelet[2615]: I1112 20:59:13.254379 2615 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:59:13.254485 kubelet[2615]: I1112 20:59:13.254410 2615 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:59:13.254485 kubelet[2615]: I1112 20:59:13.254420 2615 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:59:13.254485 kubelet[2615]: I1112 20:59:13.254449 2615 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:59:13.254553 kubelet[2615]: I1112 20:59:13.254542 2615 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:59:13.254583 kubelet[2615]: I1112 20:59:13.254557 2615 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:59:13.254609 kubelet[2615]: I1112 20:59:13.254600 2615 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:59:13.254633 kubelet[2615]: I1112 20:59:13.254619 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:59:13.255422 kubelet[2615]: I1112 20:59:13.255386 2615 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:59:13.258245 kubelet[2615]: I1112 20:59:13.255609 2615 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:59:13.258245 kubelet[2615]: I1112 20:59:13.256454 2615 server.go:1256] "Started kubelet" Nov 12 20:59:13.258245 kubelet[2615]: I1112 20:59:13.256714 2615 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:59:13.258629 kubelet[2615]: I1112 20:59:13.258603 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:59:13.260822 kubelet[2615]: I1112 20:59:13.260793 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:59:13.261002 kubelet[2615]: I1112 20:59:13.260980 2615 server.go:233] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:59:13.261459 kubelet[2615]: I1112 20:59:13.261439 2615 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:59:13.266975 kubelet[2615]: I1112 20:59:13.266935 2615 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:59:13.267800 kubelet[2615]: I1112 20:59:13.267214 2615 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:59:13.267800 kubelet[2615]: I1112 20:59:13.267359 2615 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:59:13.274395 kubelet[2615]: I1112 20:59:13.274369 2615 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:59:13.274546 kubelet[2615]: I1112 20:59:13.274447 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:59:13.276380 kubelet[2615]: E1112 20:59:13.276348 2615 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:59:13.277326 kubelet[2615]: I1112 20:59:13.277306 2615 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:59:13.279371 kubelet[2615]: I1112 20:59:13.279260 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:59:13.280722 kubelet[2615]: I1112 20:59:13.280431 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:59:13.280722 kubelet[2615]: I1112 20:59:13.280453 2615 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:59:13.280722 kubelet[2615]: I1112 20:59:13.280498 2615 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:59:13.280722 kubelet[2615]: E1112 20:59:13.280544 2615 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:59:13.313840 kubelet[2615]: I1112 20:59:13.313810 2615 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:59:13.313840 kubelet[2615]: I1112 20:59:13.313832 2615 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:59:13.313840 kubelet[2615]: I1112 20:59:13.313852 2615 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:59:13.314058 kubelet[2615]: I1112 20:59:13.314043 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:59:13.314087 kubelet[2615]: I1112 20:59:13.314068 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:59:13.314087 kubelet[2615]: I1112 20:59:13.314076 2615 policy_none.go:49] "None policy: Start" Nov 12 20:59:13.314663 kubelet[2615]: I1112 20:59:13.314518 2615 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:59:13.314663 kubelet[2615]: I1112 20:59:13.314544 2615 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:59:13.314804 kubelet[2615]: I1112 20:59:13.314701 2615 state_mem.go:75] "Updated machine memory state" Nov 12 20:59:13.318838 kubelet[2615]: I1112 20:59:13.318746 2615 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:59:13.319266 kubelet[2615]: I1112 20:59:13.319191 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:59:13.370923 kubelet[2615]: I1112 
20:59:13.370898 2615 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:59:13.378035 kubelet[2615]: I1112 20:59:13.377433 2615 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:59:13.378035 kubelet[2615]: I1112 20:59:13.377487 2615 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:59:13.381353 kubelet[2615]: I1112 20:59:13.381318 2615 topology_manager.go:215] "Topology Admit Handler" podUID="3800f700f6ac7257a1b21f6f4ac9ec7f" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:59:13.381408 kubelet[2615]: I1112 20:59:13.381398 2615 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:59:13.381444 kubelet[2615]: I1112 20:59:13.381428 2615 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:59:13.394203 kubelet[2615]: E1112 20:59:13.393768 2615 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:13.568218 kubelet[2615]: I1112 20:59:13.568180 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:13.568218 kubelet[2615]: I1112 20:59:13.568222 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:13.568471 kubelet[2615]: I1112 20:59:13.568254 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:13.568471 kubelet[2615]: I1112 20:59:13.568284 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:13.568471 kubelet[2615]: I1112 20:59:13.568306 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:13.568471 kubelet[2615]: I1112 20:59:13.568324 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") 
pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:59:13.568471 kubelet[2615]: I1112 20:59:13.568345 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3800f700f6ac7257a1b21f6f4ac9ec7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3800f700f6ac7257a1b21f6f4ac9ec7f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:59:13.568602 kubelet[2615]: I1112 20:59:13.568365 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:13.568602 kubelet[2615]: I1112 20:59:13.568383 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:59:13.693088 kubelet[2615]: E1112 20:59:13.693049 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:13.693578 kubelet[2615]: E1112 20:59:13.693548 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:13.695250 kubelet[2615]: E1112 20:59:13.695228 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:14.255689 kubelet[2615]: I1112 20:59:14.255651 2615 apiserver.go:52] "Watching apiserver" Nov 12 20:59:14.268057 kubelet[2615]: I1112 20:59:14.268025 2615 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:59:14.298551 kubelet[2615]: E1112 20:59:14.298513 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:14.299202 kubelet[2615]: E1112 20:59:14.299184 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:14.299618 kubelet[2615]: E1112 20:59:14.299601 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:14.342271 kubelet[2615]: I1112 20:59:14.342228 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.342040906 podStartE2EDuration="2.342040906s" podCreationTimestamp="2024-11-12 20:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:59:14.335059177 +0000 UTC m=+1.146182428" watchObservedRunningTime="2024-11-12 20:59:14.342040906 +0000 UTC m=+1.153164157" Nov 12 
20:59:14.350988 kubelet[2615]: I1112 20:59:14.348931 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.348900562 podStartE2EDuration="1.348900562s" podCreationTimestamp="2024-11-12 20:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:59:14.342369111 +0000 UTC m=+1.153492362" watchObservedRunningTime="2024-11-12 20:59:14.348900562 +0000 UTC m=+1.160023814" Nov 12 20:59:15.300066 kubelet[2615]: E1112 20:59:15.300030 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:16.862261 kubelet[2615]: E1112 20:59:16.862225 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:17.463075 kubelet[2615]: E1112 20:59:17.463026 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:17.508671 sudo[1669]: pam_unix(sudo:session): session closed for user root Nov 12 20:59:17.510758 sshd[1666]: pam_unix(sshd:session): session closed for user core Nov 12 20:59:17.515196 systemd[1]: sshd@8-10.0.0.160:22-10.0.0.1:60634.service: Deactivated successfully. Nov 12 20:59:17.517485 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:59:17.517735 systemd[1]: session-9.scope: Consumed 4.792s CPU time, 193.0M memory peak, 0B memory swap peak. Nov 12 20:59:17.518249 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:59:17.519261 systemd-logind[1457]: Removed session 9. Nov 12 20:59:19.420098 update_engine[1464]: I20241112 20:59:19.420009 1464 update_attempter.cc:509] Updating boot flags... 
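[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors above come from kubelet capping a pod's resolv.conf at three nameservers (the classic glibc MAXNS limit). Any extra entries on the host are silently dropped, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Go sketch of that truncation logic, assuming a fourth hypothetical nameserver on the host; this is the gist, not kubelet's actual implementation:

// Sketch only: mimics the dns.go:153 warning, not kubelet's real code.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // matches the glibc MAXNS resolver limit

// applyNameserverLimit keeps at most three nameservers and reports
// whether any had to be dropped.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(servers)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(applied, " "))
	}
}

Trimming the host's /etc/resolv.conf to three nameservers would silence the warning.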
Nov 12 20:59:19.446666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2712) Nov 12 20:59:19.473029 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2715) Nov 12 20:59:19.508048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2715) Nov 12 20:59:20.237268 kubelet[2615]: E1112 20:59:20.237231 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:20.247287 kubelet[2615]: I1112 20:59:20.247252 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.247220182 podStartE2EDuration="7.247220182s" podCreationTimestamp="2024-11-12 20:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:59:14.349142233 +0000 UTC m=+1.160265484" watchObservedRunningTime="2024-11-12 20:59:20.247220182 +0000 UTC m=+7.058343433" Nov 12 20:59:20.306837 kubelet[2615]: E1112 20:59:20.306801 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:26.646500 kubelet[2615]: I1112 20:59:26.646466 2615 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:59:26.647035 kubelet[2615]: I1112 20:59:26.646951 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:59:26.647067 containerd[1475]: time="2024-11-12T20:59:26.646785208Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:59:26.866178 kubelet[2615]: E1112 20:59:26.866149 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:27.314639 kubelet[2615]: E1112 20:59:27.314611 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:27.467327 kubelet[2615]: E1112 20:59:27.467294 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:27.558936 kubelet[2615]: I1112 20:59:27.558892 2615 topology_manager.go:215] "Topology Admit Handler" podUID="5d07da9c-8e64-4a6d-b1c7-71c834c64521" podNamespace="kube-system" podName="kube-proxy-sgp7s" Nov 12 20:59:27.565210 systemd[1]: Created slice kubepods-besteffort-pod5d07da9c_8e64_4a6d_b1c7_71c834c64521.slice - libcontainer container kubepods-besteffort-pod5d07da9c_8e64_4a6d_b1c7_71c834c64521.slice. Nov 12 20:59:27.623586 kubelet[2615]: I1112 20:59:27.623272 2615 topology_manager.go:215] "Topology Admit Handler" podUID="2c89cf6f-ff42-4029-af81-e5b36363375c" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-rdsgf" Nov 12 20:59:27.632090 systemd[1]: Created slice kubepods-besteffort-pod2c89cf6f_ff42_4029_af81_e5b36363375c.slice - libcontainer container kubepods-besteffort-pod2c89cf6f_ff42_4029_af81_e5b36363375c.slice. 
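[Editor's note] containerd's "No cni config template is specified, wait for other system components to drop the config" means the runtime is idling until something writes a CNI config (conventionally under /etc/cni/net.d); kubelet has already pushed podCIDR 192.168.0.0/24 to it over CRI. The tigera-operator pod admitted above is what will eventually install Calico's real config. Purely to illustrate the file format being waited on — the network name and plugin choice here are hypothetical, not what Calico writes — a minimal conflist for that CIDR could look like:

{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}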
Nov 12 20:59:27.663220 kubelet[2615]: I1112 20:59:27.663177 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d07da9c-8e64-4a6d-b1c7-71c834c64521-xtables-lock\") pod \"kube-proxy-sgp7s\" (UID: \"5d07da9c-8e64-4a6d-b1c7-71c834c64521\") " pod="kube-system/kube-proxy-sgp7s" Nov 12 20:59:27.663220 kubelet[2615]: I1112 20:59:27.663221 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d07da9c-8e64-4a6d-b1c7-71c834c64521-kube-proxy\") pod \"kube-proxy-sgp7s\" (UID: \"5d07da9c-8e64-4a6d-b1c7-71c834c64521\") " pod="kube-system/kube-proxy-sgp7s" Nov 12 20:59:27.663608 kubelet[2615]: I1112 20:59:27.663243 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d07da9c-8e64-4a6d-b1c7-71c834c64521-lib-modules\") pod \"kube-proxy-sgp7s\" (UID: \"5d07da9c-8e64-4a6d-b1c7-71c834c64521\") " pod="kube-system/kube-proxy-sgp7s" Nov 12 20:59:27.663608 kubelet[2615]: I1112 20:59:27.663300 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwl25\" (UniqueName: \"kubernetes.io/projected/5d07da9c-8e64-4a6d-b1c7-71c834c64521-kube-api-access-mwl25\") pod \"kube-proxy-sgp7s\" (UID: \"5d07da9c-8e64-4a6d-b1c7-71c834c64521\") " pod="kube-system/kube-proxy-sgp7s" Nov 12 20:59:27.663608 kubelet[2615]: I1112 20:59:27.663351 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rpk\" (UniqueName: \"kubernetes.io/projected/2c89cf6f-ff42-4029-af81-e5b36363375c-kube-api-access-p7rpk\") pod \"tigera-operator-56b74f76df-rdsgf\" (UID: \"2c89cf6f-ff42-4029-af81-e5b36363375c\") " pod="tigera-operator/tigera-operator-56b74f76df-rdsgf" Nov 12 20:59:27.663608 kubelet[2615]: I1112 20:59:27.663409 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c89cf6f-ff42-4029-af81-e5b36363375c-var-lib-calico\") pod \"tigera-operator-56b74f76df-rdsgf\" (UID: \"2c89cf6f-ff42-4029-af81-e5b36363375c\") " pod="tigera-operator/tigera-operator-56b74f76df-rdsgf" Nov 12 20:59:27.872622 kubelet[2615]: E1112 20:59:27.872523 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:27.873124 containerd[1475]: time="2024-11-12T20:59:27.873087730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgp7s,Uid:5d07da9c-8e64-4a6d-b1c7-71c834c64521,Namespace:kube-system,Attempt:0,}" Nov 12 20:59:27.897059 containerd[1475]: time="2024-11-12T20:59:27.896981428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:27.897159 containerd[1475]: time="2024-11-12T20:59:27.897032806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:27.897159 containerd[1475]: time="2024-11-12T20:59:27.897080395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:27.897211 containerd[1475]: time="2024-11-12T20:59:27.897158643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:27.922091 systemd[1]: Started cri-containerd-5a85536e90202a18162cd2d5bd98386facd4273a5f0cc1a951d3c63370f34ca0.scope - libcontainer container 5a85536e90202a18162cd2d5bd98386facd4273a5f0cc1a951d3c63370f34ca0. Nov 12 20:59:27.935737 containerd[1475]: time="2024-11-12T20:59:27.935698978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-rdsgf,Uid:2c89cf6f-ff42-4029-af81-e5b36363375c,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:59:27.942671 containerd[1475]: time="2024-11-12T20:59:27.942626770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgp7s,Uid:5d07da9c-8e64-4a6d-b1c7-71c834c64521,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a85536e90202a18162cd2d5bd98386facd4273a5f0cc1a951d3c63370f34ca0\"" Nov 12 20:59:27.943348 kubelet[2615]: E1112 20:59:27.943315 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:27.945356 containerd[1475]: time="2024-11-12T20:59:27.945281002Z" level=info msg="CreateContainer within sandbox \"5a85536e90202a18162cd2d5bd98386facd4273a5f0cc1a951d3c63370f34ca0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:59:27.961381 containerd[1475]: time="2024-11-12T20:59:27.961326631Z" level=info msg="CreateContainer within sandbox \"5a85536e90202a18162cd2d5bd98386facd4273a5f0cc1a951d3c63370f34ca0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9c07ba611042f3eeed35c0dc34bc472cd1f3f9c126805eaac0d401bae74a78d\"" Nov 12 20:59:27.962173 containerd[1475]: time="2024-11-12T20:59:27.962063582Z" level=info msg="StartContainer for \"d9c07ba611042f3eeed35c0dc34bc472cd1f3f9c126805eaac0d401bae74a78d\"" Nov 12 20:59:27.963162 containerd[1475]: time="2024-11-12T20:59:27.963070915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:27.963643 containerd[1475]: time="2024-11-12T20:59:27.963607578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:27.963710 containerd[1475]: time="2024-11-12T20:59:27.963626373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:27.963793 containerd[1475]: time="2024-11-12T20:59:27.963722585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:27.981222 systemd[1]: Started cri-containerd-b7f964e54893f50a224258429139d87fde7d5e766d433829a1c3713c6a2d8b37.scope - libcontainer container b7f964e54893f50a224258429139d87fde7d5e766d433829a1c3713c6a2d8b37. Nov 12 20:59:27.999111 systemd[1]: Started cri-containerd-d9c07ba611042f3eeed35c0dc34bc472cd1f3f9c126805eaac0d401bae74a78d.scope - libcontainer container d9c07ba611042f3eeed35c0dc34bc472cd1f3f9c126805eaac0d401bae74a78d. 
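[Editor's note] The sandbox/container dance above is the standard CRI sequence: RunPodSandbox returns a pod sandbox id (5a8553…), CreateContainer places the kube-proxy container inside it (d9c07b…), and StartContainer launches it. A hedged Go sketch of that call order against containerd's CRI socket, assuming the k8s.io/cri-api v1 client; the container image tag and the mostly-empty configs are illustrative, not what kubelet actually sends:

// Sketch of the CRI call sequence visible in the log; not kubelet's code.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-sgp7s",
			Namespace: "kube-system",
			Uid:       "5d07da9c-8e64-4a6d-b1c7-71c834c64521",
		},
	}

	// 1) RunPodSandbox -> "returns sandbox id ..." in the log.
	sb, err := rt.RunPodSandbox(ctx,
		&runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2) CreateContainer within that sandbox -> "returns container id ...".
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Image reference is hypothetical for this sketch.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.0"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3) StartContainer -> "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, cc.ContainerId)
}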
Nov 12 20:59:28.028701 containerd[1475]: time="2024-11-12T20:59:28.028661098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-rdsgf,Uid:2c89cf6f-ff42-4029-af81-e5b36363375c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b7f964e54893f50a224258429139d87fde7d5e766d433829a1c3713c6a2d8b37\"" Nov 12 20:59:28.030441 containerd[1475]: time="2024-11-12T20:59:28.030319467Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:59:28.036934 containerd[1475]: time="2024-11-12T20:59:28.036856357Z" level=info msg="StartContainer for \"d9c07ba611042f3eeed35c0dc34bc472cd1f3f9c126805eaac0d401bae74a78d\" returns successfully" Nov 12 20:59:28.317794 kubelet[2615]: E1112 20:59:28.317762 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:28.327663 kubelet[2615]: I1112 20:59:28.327206 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sgp7s" podStartSLOduration=1.327153432 podStartE2EDuration="1.327153432s" podCreationTimestamp="2024-11-12 20:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:59:28.326854017 +0000 UTC m=+15.137977268" watchObservedRunningTime="2024-11-12 20:59:28.327153432 +0000 UTC m=+15.138276683" Nov 12 20:59:33.320875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487420558.mount: Deactivated successfully. Nov 12 20:59:33.597769 containerd[1475]: time="2024-11-12T20:59:33.597652245Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:33.598440 containerd[1475]: time="2024-11-12T20:59:33.598385987Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763335" Nov 12 20:59:33.599485 containerd[1475]: time="2024-11-12T20:59:33.599453450Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:33.601571 containerd[1475]: time="2024-11-12T20:59:33.601505367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:33.602152 containerd[1475]: time="2024-11-12T20:59:33.602113332Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 5.571750782s" Nov 12 20:59:33.602194 containerd[1475]: time="2024-11-12T20:59:33.602150482Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:59:33.603893 containerd[1475]: time="2024-11-12T20:59:33.603868199Z" level=info msg="CreateContainer within sandbox \"b7f964e54893f50a224258429139d87fde7d5e766d433829a1c3713c6a2d8b37\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:59:33.615854 containerd[1475]: time="2024-11-12T20:59:33.615813341Z" 
level=info msg="CreateContainer within sandbox \"b7f964e54893f50a224258429139d87fde7d5e766d433829a1c3713c6a2d8b37\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dcb9cb18e18636a20f767b23a241d86d0af11b7e28e882079a6b1541cbd22f1a\"" Nov 12 20:59:33.616291 containerd[1475]: time="2024-11-12T20:59:33.616258550Z" level=info msg="StartContainer for \"dcb9cb18e18636a20f767b23a241d86d0af11b7e28e882079a6b1541cbd22f1a\"" Nov 12 20:59:33.649114 systemd[1]: Started cri-containerd-dcb9cb18e18636a20f767b23a241d86d0af11b7e28e882079a6b1541cbd22f1a.scope - libcontainer container dcb9cb18e18636a20f767b23a241d86d0af11b7e28e882079a6b1541cbd22f1a. Nov 12 20:59:33.673253 containerd[1475]: time="2024-11-12T20:59:33.673061591Z" level=info msg="StartContainer for \"dcb9cb18e18636a20f767b23a241d86d0af11b7e28e882079a6b1541cbd22f1a\" returns successfully" Nov 12 20:59:34.333769 kubelet[2615]: I1112 20:59:34.333727 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-rdsgf" podStartSLOduration=1.7609816839999999 podStartE2EDuration="7.333689292s" podCreationTimestamp="2024-11-12 20:59:27 +0000 UTC" firstStartedPulling="2024-11-12 20:59:28.029742448 +0000 UTC m=+14.840865700" lastFinishedPulling="2024-11-12 20:59:33.602450056 +0000 UTC m=+20.413573308" observedRunningTime="2024-11-12 20:59:34.333409265 +0000 UTC m=+21.144532516" watchObservedRunningTime="2024-11-12 20:59:34.333689292 +0000 UTC m=+21.144812553" Nov 12 20:59:36.557155 kubelet[2615]: I1112 20:59:36.557099 2615 topology_manager.go:215] "Topology Admit Handler" podUID="e5cc8baa-618e-4c3a-ba92-e4313bf56274" podNamespace="calico-system" podName="calico-typha-799cc47688-kzpgt" Nov 12 20:59:36.567577 systemd[1]: Created slice kubepods-besteffort-pode5cc8baa_618e_4c3a_ba92_e4313bf56274.slice - libcontainer container kubepods-besteffort-pode5cc8baa_618e_4c3a_ba92_e4313bf56274.slice. Nov 12 20:59:36.615192 kubelet[2615]: I1112 20:59:36.614556 2615 topology_manager.go:215] "Topology Admit Handler" podUID="005fe702-22ba-4484-8c23-b07bf989ec13" podNamespace="calico-system" podName="calico-node-km48h" Nov 12 20:59:36.624960 systemd[1]: Created slice kubepods-besteffort-pod005fe702_22ba_4484_8c23_b07bf989ec13.slice - libcontainer container kubepods-besteffort-pod005fe702_22ba_4484_8c23_b07bf989ec13.slice. 
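[Editor's note] The latency figures for tigera-operator-56b74f76df-rdsgf above are self-consistent and worth unpacking as a worked example. Using the monotonic m= offsets, the image pull window is lastFinishedPulling − firstStartedPulling = 20.413573308 − 14.840865700 ≈ 5.573 s, matching containerd's reported pull of 21,757,542 bytes in 5.571750782 s (roughly 21.76 MB / 5.57 s ≈ 3.9 MB/s). The end-to-end figure is observedRunningTime − podCreationTimestamp ≈ 20:59:34.334 − 20:59:27 ≈ 7.334 s, and the SLO figure excludes the pull: 7.333689292 − 5.572707608 = 1.760981684 s, exactly the logged podStartSLOduration. For the earlier static pods the pull timestamps are the zero value ("0001-01-01 00:00:00"), so SLO and E2E durations coincide.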
Nov 12 20:59:36.626843 kubelet[2615]: I1112 20:59:36.626662 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cc8baa-618e-4c3a-ba92-e4313bf56274-tigera-ca-bundle\") pod \"calico-typha-799cc47688-kzpgt\" (UID: \"e5cc8baa-618e-4c3a-ba92-e4313bf56274\") " pod="calico-system/calico-typha-799cc47688-kzpgt" Nov 12 20:59:36.626843 kubelet[2615]: I1112 20:59:36.626709 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5cc8baa-618e-4c3a-ba92-e4313bf56274-typha-certs\") pod \"calico-typha-799cc47688-kzpgt\" (UID: \"e5cc8baa-618e-4c3a-ba92-e4313bf56274\") " pod="calico-system/calico-typha-799cc47688-kzpgt" Nov 12 20:59:36.626843 kubelet[2615]: I1112 20:59:36.626731 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tcqf\" (UniqueName: \"kubernetes.io/projected/e5cc8baa-618e-4c3a-ba92-e4313bf56274-kube-api-access-4tcqf\") pod \"calico-typha-799cc47688-kzpgt\" (UID: \"e5cc8baa-618e-4c3a-ba92-e4313bf56274\") " pod="calico-system/calico-typha-799cc47688-kzpgt" Nov 12 20:59:36.729766 kubelet[2615]: I1112 20:59:36.727931 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-lib-modules\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729766 kubelet[2615]: I1112 20:59:36.727987 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-var-lib-calico\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729766 kubelet[2615]: I1112 20:59:36.728010 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/005fe702-22ba-4484-8c23-b07bf989ec13-node-certs\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729766 kubelet[2615]: I1112 20:59:36.728032 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-cni-bin-dir\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729766 kubelet[2615]: I1112 20:59:36.728049 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xl28\" (UniqueName: \"kubernetes.io/projected/005fe702-22ba-4484-8c23-b07bf989ec13-kube-api-access-9xl28\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729992 kubelet[2615]: I1112 20:59:36.728079 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-xtables-lock\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 
12 20:59:36.729992 kubelet[2615]: I1112 20:59:36.728099 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-var-run-calico\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729992 kubelet[2615]: I1112 20:59:36.728119 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-policysync\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729992 kubelet[2615]: I1112 20:59:36.728150 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005fe702-22ba-4484-8c23-b07bf989ec13-tigera-ca-bundle\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.729992 kubelet[2615]: I1112 20:59:36.728170 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-cni-net-dir\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.730266 kubelet[2615]: I1112 20:59:36.728199 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-cni-log-dir\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.730266 kubelet[2615]: I1112 20:59:36.728220 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/005fe702-22ba-4484-8c23-b07bf989ec13-flexvol-driver-host\") pod \"calico-node-km48h\" (UID: \"005fe702-22ba-4484-8c23-b07bf989ec13\") " pod="calico-system/calico-node-km48h" Nov 12 20:59:36.751285 kubelet[2615]: I1112 20:59:36.751250 2615 topology_manager.go:215] "Topology Admit Handler" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119" podNamespace="calico-system" podName="csi-node-driver-2vmjh" Nov 12 20:59:36.751524 kubelet[2615]: E1112 20:59:36.751505 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119" Nov 12 20:59:36.829720 kubelet[2615]: I1112 20:59:36.828804 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7323a1ea-5ba5-4a75-b521-01e3f15f8119-registration-dir\") pod \"csi-node-driver-2vmjh\" (UID: \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\") " pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:36.829720 kubelet[2615]: I1112 20:59:36.828838 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/7323a1ea-5ba5-4a75-b521-01e3f15f8119-varrun\") pod \"csi-node-driver-2vmjh\" (UID: \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\") " pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:36.829720 kubelet[2615]: I1112 20:59:36.828917 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7323a1ea-5ba5-4a75-b521-01e3f15f8119-kubelet-dir\") pod \"csi-node-driver-2vmjh\" (UID: \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\") " pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:36.829720 kubelet[2615]: I1112 20:59:36.828939 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh654\" (UniqueName: \"kubernetes.io/projected/7323a1ea-5ba5-4a75-b521-01e3f15f8119-kube-api-access-rh654\") pod \"csi-node-driver-2vmjh\" (UID: \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\") " pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:36.829720 kubelet[2615]: I1112 20:59:36.829001 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7323a1ea-5ba5-4a75-b521-01e3f15f8119-socket-dir\") pod \"csi-node-driver-2vmjh\" (UID: \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\") " pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:36.831155 kubelet[2615]: E1112 20:59:36.831113 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.831155 kubelet[2615]: W1112 20:59:36.831132 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.831155 kubelet[2615]: E1112 20:59:36.831155 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.832462 kubelet[2615]: E1112 20:59:36.832420 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.832462 kubelet[2615]: W1112 20:59:36.832443 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.832462 kubelet[2615]: E1112 20:59:36.832462 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.837595 kubelet[2615]: E1112 20:59:36.837574 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.837595 kubelet[2615]: W1112 20:59:36.837589 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.837595 kubelet[2615]: E1112 20:59:36.837602 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.873146 kubelet[2615]: E1112 20:59:36.873112 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:36.873658 containerd[1475]: time="2024-11-12T20:59:36.873614962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799cc47688-kzpgt,Uid:e5cc8baa-618e-4c3a-ba92-e4313bf56274,Namespace:calico-system,Attempt:0,}" Nov 12 20:59:36.896698 containerd[1475]: time="2024-11-12T20:59:36.896604406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:36.896698 containerd[1475]: time="2024-11-12T20:59:36.896667145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:36.896964 containerd[1475]: time="2024-11-12T20:59:36.896680580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:36.896964 containerd[1475]: time="2024-11-12T20:59:36.896849428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:36.919100 systemd[1]: Started cri-containerd-27e7aa4d2a67e5e7fc00592a09dd283374ad33fce545fac934adf964b0c04cef.scope - libcontainer container 27e7aa4d2a67e5e7fc00592a09dd283374ad33fce545fac934adf964b0c04cef. Nov 12 20:59:36.927990 kubelet[2615]: E1112 20:59:36.927948 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:36.928413 containerd[1475]: time="2024-11-12T20:59:36.928371992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km48h,Uid:005fe702-22ba-4484-8c23-b07bf989ec13,Namespace:calico-system,Attempt:0,}" Nov 12 20:59:36.930327 kubelet[2615]: E1112 20:59:36.930307 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.930327 kubelet[2615]: W1112 20:59:36.930325 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.930403 kubelet[2615]: E1112 20:59:36.930355 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.930682 kubelet[2615]: E1112 20:59:36.930662 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.930682 kubelet[2615]: W1112 20:59:36.930679 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.930750 kubelet[2615]: E1112 20:59:36.930704 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.931210 kubelet[2615]: E1112 20:59:36.931163 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.931210 kubelet[2615]: W1112 20:59:36.931188 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.931290 kubelet[2615]: E1112 20:59:36.931219 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.931762 kubelet[2615]: E1112 20:59:36.931699 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.931762 kubelet[2615]: W1112 20:59:36.931743 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.931762 kubelet[2615]: E1112 20:59:36.931756 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.932266 kubelet[2615]: E1112 20:59:36.932241 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.932266 kubelet[2615]: W1112 20:59:36.932257 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.932866 kubelet[2615]: E1112 20:59:36.932335 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.932866 kubelet[2615]: E1112 20:59:36.932613 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.932866 kubelet[2615]: W1112 20:59:36.932632 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.932866 kubelet[2615]: E1112 20:59:36.932706 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.933277 kubelet[2615]: E1112 20:59:36.933256 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.933277 kubelet[2615]: W1112 20:59:36.933269 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.933684 kubelet[2615]: E1112 20:59:36.933513 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.933894 kubelet[2615]: E1112 20:59:36.933876 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.933894 kubelet[2615]: W1112 20:59:36.933890 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.933990 kubelet[2615]: E1112 20:59:36.933952 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.934349 kubelet[2615]: E1112 20:59:36.934314 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.934349 kubelet[2615]: W1112 20:59:36.934327 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.934434 kubelet[2615]: E1112 20:59:36.934405 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.934657 kubelet[2615]: E1112 20:59:36.934632 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.934657 kubelet[2615]: W1112 20:59:36.934647 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.934724 kubelet[2615]: E1112 20:59:36.934668 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.935124 kubelet[2615]: E1112 20:59:36.934953 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.935124 kubelet[2615]: W1112 20:59:36.934992 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.935124 kubelet[2615]: E1112 20:59:36.935040 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.935822 kubelet[2615]: E1112 20:59:36.935250 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.935822 kubelet[2615]: W1112 20:59:36.935258 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.935822 kubelet[2615]: E1112 20:59:36.935288 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.935822 kubelet[2615]: E1112 20:59:36.935532 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.935822 kubelet[2615]: W1112 20:59:36.935541 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.935822 kubelet[2615]: E1112 20:59:36.935560 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.935948 kubelet[2615]: E1112 20:59:36.935849 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.935948 kubelet[2615]: W1112 20:59:36.935859 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.936037 kubelet[2615]: E1112 20:59:36.935961 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.936121 kubelet[2615]: E1112 20:59:36.936104 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.936121 kubelet[2615]: W1112 20:59:36.936117 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.936280 kubelet[2615]: E1112 20:59:36.936198 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.936310 kubelet[2615]: E1112 20:59:36.936301 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.936310 kubelet[2615]: W1112 20:59:36.936308 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.936483 kubelet[2615]: E1112 20:59:36.936396 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.936533 kubelet[2615]: E1112 20:59:36.936518 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.936533 kubelet[2615]: W1112 20:59:36.936528 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.936684 kubelet[2615]: E1112 20:59:36.936655 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.936749 kubelet[2615]: E1112 20:59:36.936735 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.936749 kubelet[2615]: W1112 20:59:36.936745 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.936799 kubelet[2615]: E1112 20:59:36.936769 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.937029 kubelet[2615]: E1112 20:59:36.937014 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.937029 kubelet[2615]: W1112 20:59:36.937025 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.937078 kubelet[2615]: E1112 20:59:36.937046 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.937313 kubelet[2615]: E1112 20:59:36.937298 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.937313 kubelet[2615]: W1112 20:59:36.937309 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.937378 kubelet[2615]: E1112 20:59:36.937324 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.937641 kubelet[2615]: E1112 20:59:36.937591 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.937665 kubelet[2615]: W1112 20:59:36.937643 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.937690 kubelet[2615]: E1112 20:59:36.937663 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.938017 kubelet[2615]: E1112 20:59:36.937958 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.938058 kubelet[2615]: W1112 20:59:36.938023 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.938058 kubelet[2615]: E1112 20:59:36.938052 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:36.938298 kubelet[2615]: E1112 20:59:36.938275 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.938298 kubelet[2615]: W1112 20:59:36.938287 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.938360 kubelet[2615]: E1112 20:59:36.938315 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.938628 kubelet[2615]: E1112 20:59:36.938605 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.938628 kubelet[2615]: W1112 20:59:36.938621 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.938758 kubelet[2615]: E1112 20:59:36.938742 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.938932 kubelet[2615]: E1112 20:59:36.938909 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.938932 kubelet[2615]: W1112 20:59:36.938923 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.938990 kubelet[2615]: E1112 20:59:36.938934 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.946775 kubelet[2615]: E1112 20:59:36.946735 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:36.946775 kubelet[2615]: W1112 20:59:36.946758 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:36.946775 kubelet[2615]: E1112 20:59:36.946780 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:36.956136 containerd[1475]: time="2024-11-12T20:59:36.955524285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:59:36.956136 containerd[1475]: time="2024-11-12T20:59:36.955581995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:59:36.956136 containerd[1475]: time="2024-11-12T20:59:36.955595841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:36.956136 containerd[1475]: time="2024-11-12T20:59:36.955737227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:59:36.967723 containerd[1475]: time="2024-11-12T20:59:36.967637467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799cc47688-kzpgt,Uid:e5cc8baa-618e-4c3a-ba92-e4313bf56274,Namespace:calico-system,Attempt:0,} returns sandbox id \"27e7aa4d2a67e5e7fc00592a09dd283374ad33fce545fac934adf964b0c04cef\"" Nov 12 20:59:36.972021 kubelet[2615]: E1112 20:59:36.971618 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:36.977259 containerd[1475]: time="2024-11-12T20:59:36.977216637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:59:36.980168 systemd[1]: Started cri-containerd-4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124.scope - libcontainer container 4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124. Nov 12 20:59:37.002840 containerd[1475]: time="2024-11-12T20:59:37.002798232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km48h,Uid:005fe702-22ba-4484-8c23-b07bf989ec13,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\"" Nov 12 20:59:37.003528 kubelet[2615]: E1112 20:59:37.003503 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:38.280989 kubelet[2615]: E1112 20:59:38.280952 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119" Nov 12 20:59:38.811320 containerd[1475]: time="2024-11-12T20:59:38.811259519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:38.812069 containerd[1475]: time="2024-11-12T20:59:38.812018567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:59:38.813216 containerd[1475]: time="2024-11-12T20:59:38.813186557Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:38.815122 containerd[1475]: time="2024-11-12T20:59:38.815090099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:38.815647 containerd[1475]: time="2024-11-12T20:59:38.815606842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 1.838349889s" Nov 12 20:59:38.815647 containerd[1475]: time="2024-11-12T20:59:38.815642921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference 
\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:59:38.818146 containerd[1475]: time="2024-11-12T20:59:38.818109463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:59:38.828894 containerd[1475]: time="2024-11-12T20:59:38.828826157Z" level=info msg="CreateContainer within sandbox \"27e7aa4d2a67e5e7fc00592a09dd283374ad33fce545fac934adf964b0c04cef\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:59:38.844706 containerd[1475]: time="2024-11-12T20:59:38.844670924Z" level=info msg="CreateContainer within sandbox \"27e7aa4d2a67e5e7fc00592a09dd283374ad33fce545fac934adf964b0c04cef\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5353709bd37a5db094d230bd1ca2cde7ed9530e4b436b058e28b16c72a2e0e3\"" Nov 12 20:59:38.846018 containerd[1475]: time="2024-11-12T20:59:38.845072599Z" level=info msg="StartContainer for \"f5353709bd37a5db094d230bd1ca2cde7ed9530e4b436b058e28b16c72a2e0e3\"" Nov 12 20:59:38.876093 systemd[1]: Started cri-containerd-f5353709bd37a5db094d230bd1ca2cde7ed9530e4b436b058e28b16c72a2e0e3.scope - libcontainer container f5353709bd37a5db094d230bd1ca2cde7ed9530e4b436b058e28b16c72a2e0e3. Nov 12 20:59:38.914276 containerd[1475]: time="2024-11-12T20:59:38.914187334Z" level=info msg="StartContainer for \"f5353709bd37a5db094d230bd1ca2cde7ed9530e4b436b058e28b16c72a2e0e3\" returns successfully" Nov 12 20:59:39.335833 kubelet[2615]: E1112 20:59:39.335801 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:39.351698 kubelet[2615]: I1112 20:59:39.351670 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-799cc47688-kzpgt" podStartSLOduration=1.506835105 podStartE2EDuration="3.351629107s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 20:59:36.973127958 +0000 UTC m=+23.784251209" lastFinishedPulling="2024-11-12 20:59:38.81792196 +0000 UTC m=+25.629045211" observedRunningTime="2024-11-12 20:59:39.350805788 +0000 UTC m=+26.161929039" watchObservedRunningTime="2024-11-12 20:59:39.351629107 +0000 UTC m=+26.162752358" Nov 12 20:59:39.434906 kubelet[2615]: E1112 20:59:39.434875 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.434906 kubelet[2615]: W1112 20:59:39.434898 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.435010 kubelet[2615]: E1112 20:59:39.434920 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.435209 kubelet[2615]: E1112 20:59:39.435192 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.435209 kubelet[2615]: W1112 20:59:39.435203 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.435209 kubelet[2615]: E1112 20:59:39.435215 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.435444 kubelet[2615]: E1112 20:59:39.435423 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.435444 kubelet[2615]: W1112 20:59:39.435434 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.435444 kubelet[2615]: E1112 20:59:39.435444 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.435686 kubelet[2615]: E1112 20:59:39.435668 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.435716 kubelet[2615]: W1112 20:59:39.435686 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.435716 kubelet[2615]: E1112 20:59:39.435707 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.435954 kubelet[2615]: E1112 20:59:39.435931 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.435954 kubelet[2615]: W1112 20:59:39.435941 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.435954 kubelet[2615]: E1112 20:59:39.435952 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.436205 kubelet[2615]: E1112 20:59:39.436192 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.436205 kubelet[2615]: W1112 20:59:39.436202 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.436265 kubelet[2615]: E1112 20:59:39.436213 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.436441 kubelet[2615]: E1112 20:59:39.436426 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.436441 kubelet[2615]: W1112 20:59:39.436439 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.436505 kubelet[2615]: E1112 20:59:39.436451 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.436646 kubelet[2615]: E1112 20:59:39.436632 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.436646 kubelet[2615]: W1112 20:59:39.436642 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.436712 kubelet[2615]: E1112 20:59:39.436663 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.436911 kubelet[2615]: E1112 20:59:39.436897 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.436911 kubelet[2615]: W1112 20:59:39.436907 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.436985 kubelet[2615]: E1112 20:59:39.436917 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.437126 kubelet[2615]: E1112 20:59:39.437112 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.437126 kubelet[2615]: W1112 20:59:39.437123 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.437192 kubelet[2615]: E1112 20:59:39.437134 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.437340 kubelet[2615]: E1112 20:59:39.437318 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.437340 kubelet[2615]: W1112 20:59:39.437329 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.437340 kubelet[2615]: E1112 20:59:39.437338 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.437519 kubelet[2615]: E1112 20:59:39.437506 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.437519 kubelet[2615]: W1112 20:59:39.437515 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.437570 kubelet[2615]: E1112 20:59:39.437525 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.437703 kubelet[2615]: E1112 20:59:39.437690 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.437703 kubelet[2615]: W1112 20:59:39.437699 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.437747 kubelet[2615]: E1112 20:59:39.437710 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.437886 kubelet[2615]: E1112 20:59:39.437872 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.437886 kubelet[2615]: W1112 20:59:39.437882 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.437941 kubelet[2615]: E1112 20:59:39.437892 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.438096 kubelet[2615]: E1112 20:59:39.438082 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.438096 kubelet[2615]: W1112 20:59:39.438093 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.438144 kubelet[2615]: E1112 20:59:39.438102 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.453483 kubelet[2615]: E1112 20:59:39.453460 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.453483 kubelet[2615]: W1112 20:59:39.453473 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.453483 kubelet[2615]: E1112 20:59:39.453486 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.453717 kubelet[2615]: E1112 20:59:39.453693 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.453717 kubelet[2615]: W1112 20:59:39.453708 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.453761 kubelet[2615]: E1112 20:59:39.453725 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.453963 kubelet[2615]: E1112 20:59:39.453947 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.453963 kubelet[2615]: W1112 20:59:39.453958 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.454042 kubelet[2615]: E1112 20:59:39.453984 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.454223 kubelet[2615]: E1112 20:59:39.454200 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.454223 kubelet[2615]: W1112 20:59:39.454214 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.454273 kubelet[2615]: E1112 20:59:39.454232 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.454451 kubelet[2615]: E1112 20:59:39.454431 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.454451 kubelet[2615]: W1112 20:59:39.454442 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.454507 kubelet[2615]: E1112 20:59:39.454457 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.454646 kubelet[2615]: E1112 20:59:39.454632 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.454646 kubelet[2615]: W1112 20:59:39.454642 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.454813 kubelet[2615]: E1112 20:59:39.454658 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.454863 kubelet[2615]: E1112 20:59:39.454850 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.454863 kubelet[2615]: W1112 20:59:39.454859 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.454913 kubelet[2615]: E1112 20:59:39.454875 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.455098 kubelet[2615]: E1112 20:59:39.455081 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.455098 kubelet[2615]: W1112 20:59:39.455093 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.455154 kubelet[2615]: E1112 20:59:39.455111 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.455317 kubelet[2615]: E1112 20:59:39.455303 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.455317 kubelet[2615]: W1112 20:59:39.455313 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.455371 kubelet[2615]: E1112 20:59:39.455342 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.455512 kubelet[2615]: E1112 20:59:39.455498 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.455512 kubelet[2615]: W1112 20:59:39.455508 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.455556 kubelet[2615]: E1112 20:59:39.455536 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.455693 kubelet[2615]: E1112 20:59:39.455679 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.455693 kubelet[2615]: W1112 20:59:39.455689 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.455745 kubelet[2615]: E1112 20:59:39.455704 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.455913 kubelet[2615]: E1112 20:59:39.455898 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.455913 kubelet[2615]: W1112 20:59:39.455909 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.455989 kubelet[2615]: E1112 20:59:39.455926 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.456148 kubelet[2615]: E1112 20:59:39.456135 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.456148 kubelet[2615]: W1112 20:59:39.456144 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.456196 kubelet[2615]: E1112 20:59:39.456159 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.456345 kubelet[2615]: E1112 20:59:39.456327 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.456345 kubelet[2615]: W1112 20:59:39.456340 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.456396 kubelet[2615]: E1112 20:59:39.456357 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.456555 kubelet[2615]: E1112 20:59:39.456541 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.456555 kubelet[2615]: W1112 20:59:39.456551 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.456600 kubelet[2615]: E1112 20:59:39.456571 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.456769 kubelet[2615]: E1112 20:59:39.456755 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.456769 kubelet[2615]: W1112 20:59:39.456765 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.456817 kubelet[2615]: E1112 20:59:39.456778 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:59:39.457066 kubelet[2615]: E1112 20:59:39.457049 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.457066 kubelet[2615]: W1112 20:59:39.457061 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.457129 kubelet[2615]: E1112 20:59:39.457078 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:39.457280 kubelet[2615]: E1112 20:59:39.457266 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:59:39.457280 kubelet[2615]: W1112 20:59:39.457276 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:59:39.457333 kubelet[2615]: E1112 20:59:39.457286 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:59:40.162163 containerd[1475]: time="2024-11-12T20:59:40.162113516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:40.163112 containerd[1475]: time="2024-11-12T20:59:40.163072761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:59:40.164655 containerd[1475]: time="2024-11-12T20:59:40.164604474Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:40.166640 containerd[1475]: time="2024-11-12T20:59:40.166616249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:40.167158 containerd[1475]: time="2024-11-12T20:59:40.167129905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.348986418s" Nov 12 20:59:40.167158 containerd[1475]: time="2024-11-12T20:59:40.167157858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:59:40.169909 containerd[1475]: time="2024-11-12T20:59:40.169870653Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:59:40.183104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395297840.mount: Deactivated successfully. 
Nov 12 20:59:40.185621 containerd[1475]: time="2024-11-12T20:59:40.185586266Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d\""
Nov 12 20:59:40.186140 containerd[1475]: time="2024-11-12T20:59:40.186045220Z" level=info msg="StartContainer for \"4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d\""
Nov 12 20:59:40.216090 systemd[1]: Started cri-containerd-4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d.scope - libcontainer container 4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d.
Nov 12 20:59:40.243622 containerd[1475]: time="2024-11-12T20:59:40.243551793Z" level=info msg="StartContainer for \"4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d\" returns successfully"
Nov 12 20:59:40.256039 systemd[1]: cri-containerd-4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d.scope: Deactivated successfully.
Nov 12 20:59:40.281288 kubelet[2615]: E1112 20:59:40.281222 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119"
Nov 12 20:59:40.539218 kubelet[2615]: I1112 20:59:40.539031 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:59:40.539721 kubelet[2615]: E1112 20:59:40.539630 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:40.541080 kubelet[2615]: E1112 20:59:40.540189 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:40.822766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d-rootfs.mount: Deactivated successfully.
Nov 12 20:59:40.976838 containerd[1475]: time="2024-11-12T20:59:40.974223239Z" level=info msg="shim disconnected" id=4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d namespace=k8s.io
Nov 12 20:59:40.977024 containerd[1475]: time="2024-11-12T20:59:40.976839782Z" level=warning msg="cleaning up after shim disconnected" id=4583e51b0a6bd7e6156d72e6037b9f735129080d66d46b8b26c87ab6147d8c0d namespace=k8s.io
Nov 12 20:59:40.977024 containerd[1475]: time="2024-11-12T20:59:40.976859790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:59:41.542468 kubelet[2615]: E1112 20:59:41.542424 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:41.543879 containerd[1475]: time="2024-11-12T20:59:41.543644271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 20:59:42.281209 kubelet[2615]: E1112 20:59:42.281164 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119"
Nov 12 20:59:43.474984 systemd[1]: Started sshd@9-10.0.0.160:22-10.0.0.1:48128.service - OpenSSH per-connection server daemon (10.0.0.1:48128).
Nov 12 20:59:43.512061 sshd[3285]: Accepted publickey for core from 10.0.0.1 port 48128 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:59:43.513673 sshd[3285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:43.517829 systemd-logind[1457]: New session 10 of user core.
Nov 12 20:59:43.523117 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 20:59:43.632706 sshd[3285]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:43.636783 systemd[1]: sshd@9-10.0.0.160:22-10.0.0.1:48128.service: Deactivated successfully.
Nov 12 20:59:43.638596 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 20:59:43.639269 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
Nov 12 20:59:43.640168 systemd-logind[1457]: Removed session 10.
Nov 12 20:59:44.281036 kubelet[2615]: E1112 20:59:44.281000 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119"
Nov 12 20:59:46.280914 kubelet[2615]: E1112 20:59:46.280875 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119"
Nov 12 20:59:46.354589 containerd[1475]: time="2024-11-12T20:59:46.354545611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:59:46.355643 containerd[1475]: time="2024-11-12T20:59:46.355440904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683"
Nov 12 20:59:46.356743 containerd[1475]: time="2024-11-12T20:59:46.356700012Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:59:46.358728 containerd[1475]: time="2024-11-12T20:59:46.358658674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:59:46.359340 containerd[1475]: time="2024-11-12T20:59:46.359312363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 4.815629189s"
Nov 12 20:59:46.359410 containerd[1475]: time="2024-11-12T20:59:46.359340636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\""
Nov 12 20:59:46.361294 containerd[1475]: time="2024-11-12T20:59:46.361270314Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 20:59:46.380218 containerd[1475]: time="2024-11-12T20:59:46.380168185Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565\""
Nov 12 20:59:46.381124 containerd[1475]: time="2024-11-12T20:59:46.380653388Z" level=info msg="StartContainer for \"73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565\""
Nov 12 20:59:46.415106 systemd[1]: Started cri-containerd-73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565.scope - libcontainer container 73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565.
Nov 12 20:59:46.684373 containerd[1475]: time="2024-11-12T20:59:46.684104174Z" level=info msg="StartContainer for \"73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565\" returns successfully"
Nov 12 20:59:47.688647 kubelet[2615]: E1112 20:59:47.688618 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:47.860415 containerd[1475]: time="2024-11-12T20:59:47.860369259Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:59:47.862932 systemd[1]: cri-containerd-73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565.scope: Deactivated successfully.
Nov 12 20:59:47.882223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565-rootfs.mount: Deactivated successfully.
Nov 12 20:59:47.900682 kubelet[2615]: I1112 20:59:47.900658 2615 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:59:47.943762 containerd[1475]: time="2024-11-12T20:59:47.943392829Z" level=info msg="shim disconnected" id=73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565 namespace=k8s.io
Nov 12 20:59:47.943762 containerd[1475]: time="2024-11-12T20:59:47.943466698Z" level=warning msg="cleaning up after shim disconnected" id=73a94840d4971653c5c0af1d2b6a998202734e0956b457f4a4a930aa7313f565 namespace=k8s.io
Nov 12 20:59:47.943762 containerd[1475]: time="2024-11-12T20:59:47.943478530Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:59:47.947006 kubelet[2615]: I1112 20:59:47.945725 2615 topology_manager.go:215] "Topology Admit Handler" podUID="a46afc74-9c76-46ea-bc9f-639441ca0037" podNamespace="kube-system" podName="coredns-76f75df574-gj9vk"
Nov 12 20:59:47.949240 kubelet[2615]: I1112 20:59:47.949216 2615 topology_manager.go:215] "Topology Admit Handler" podUID="e2c54d21-3c5b-4cad-88df-27707d237df7" podNamespace="kube-system" podName="coredns-76f75df574-9qn2s"
Nov 12 20:59:47.950107 kubelet[2615]: I1112 20:59:47.950088 2615 topology_manager.go:215] "Topology Admit Handler" podUID="de27d36a-201e-4191-8379-e6120ae4db51" podNamespace="calico-apiserver" podName="calico-apiserver-7dfb9bbd9d-zxsdt"
Nov 12 20:59:47.951120 kubelet[2615]: I1112 20:59:47.951094 2615 topology_manager.go:215] "Topology Admit Handler" podUID="e456c18a-3106-4a95-8cb0-7da002cc0d2d" podNamespace="calico-system" podName="calico-kube-controllers-7b4bd556db-x6tzm"
Nov 12 20:59:47.951360 kubelet[2615]: I1112 20:59:47.951322 2615 topology_manager.go:215] "Topology Admit Handler" podUID="3b947df0-74b2-4106-85b7-80347fe0a3b9" podNamespace="calico-apiserver" podName="calico-apiserver-7dfb9bbd9d-f7tm4"
Nov 12 20:59:47.963870 systemd[1]: Created slice kubepods-burstable-poda46afc74_9c76_46ea_bc9f_639441ca0037.slice - libcontainer container kubepods-burstable-poda46afc74_9c76_46ea_bc9f_639441ca0037.slice.
Nov 12 20:59:47.970882 systemd[1]: Created slice kubepods-burstable-pode2c54d21_3c5b_4cad_88df_27707d237df7.slice - libcontainer container kubepods-burstable-pode2c54d21_3c5b_4cad_88df_27707d237df7.slice.
Nov 12 20:59:47.976920 systemd[1]: Created slice kubepods-besteffort-podde27d36a_201e_4191_8379_e6120ae4db51.slice - libcontainer container kubepods-besteffort-podde27d36a_201e_4191_8379_e6120ae4db51.slice.
Nov 12 20:59:47.986486 systemd[1]: Created slice kubepods-besteffort-pode456c18a_3106_4a95_8cb0_7da002cc0d2d.slice - libcontainer container kubepods-besteffort-pode456c18a_3106_4a95_8cb0_7da002cc0d2d.slice.
Nov 12 20:59:47.992158 systemd[1]: Created slice kubepods-besteffort-pod3b947df0_74b2_4106_85b7_80347fe0a3b9.slice - libcontainer container kubepods-besteffort-pod3b947df0_74b2_4106_85b7_80347fe0a3b9.slice.
Nov 12 20:59:48.015852 kubelet[2615]: I1112 20:59:48.015814 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzrh\" (UniqueName: \"kubernetes.io/projected/3b947df0-74b2-4106-85b7-80347fe0a3b9-kube-api-access-8wzrh\") pod \"calico-apiserver-7dfb9bbd9d-f7tm4\" (UID: \"3b947df0-74b2-4106-85b7-80347fe0a3b9\") " pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4"
Nov 12 20:59:48.015852 kubelet[2615]: I1112 20:59:48.015855 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g678\" (UniqueName: \"kubernetes.io/projected/e2c54d21-3c5b-4cad-88df-27707d237df7-kube-api-access-2g678\") pod \"coredns-76f75df574-9qn2s\" (UID: \"e2c54d21-3c5b-4cad-88df-27707d237df7\") " pod="kube-system/coredns-76f75df574-9qn2s"
Nov 12 20:59:48.016007 kubelet[2615]: I1112 20:59:48.015914 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de27d36a-201e-4191-8379-e6120ae4db51-calico-apiserver-certs\") pod \"calico-apiserver-7dfb9bbd9d-zxsdt\" (UID: \"de27d36a-201e-4191-8379-e6120ae4db51\") " pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt"
Nov 12 20:59:48.016007 kubelet[2615]: I1112 20:59:48.015934 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c54d21-3c5b-4cad-88df-27707d237df7-config-volume\") pod \"coredns-76f75df574-9qn2s\" (UID: \"e2c54d21-3c5b-4cad-88df-27707d237df7\") " pod="kube-system/coredns-76f75df574-9qn2s"
Nov 12 20:59:48.016007 kubelet[2615]: I1112 20:59:48.015954 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e456c18a-3106-4a95-8cb0-7da002cc0d2d-tigera-ca-bundle\") pod \"calico-kube-controllers-7b4bd556db-x6tzm\" (UID: \"e456c18a-3106-4a95-8cb0-7da002cc0d2d\") " pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm"
Nov 12 20:59:48.016127 kubelet[2615]: I1112 20:59:48.016023 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8hn7\" (UniqueName: \"kubernetes.io/projected/e456c18a-3106-4a95-8cb0-7da002cc0d2d-kube-api-access-n8hn7\") pod \"calico-kube-controllers-7b4bd556db-x6tzm\" (UID: \"e456c18a-3106-4a95-8cb0-7da002cc0d2d\") " pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm"
Nov 12 20:59:48.016127 kubelet[2615]: I1112 20:59:48.016060 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n82f\" (UniqueName: \"kubernetes.io/projected/a46afc74-9c76-46ea-bc9f-639441ca0037-kube-api-access-9n82f\") pod \"coredns-76f75df574-gj9vk\" (UID: \"a46afc74-9c76-46ea-bc9f-639441ca0037\") " pod="kube-system/coredns-76f75df574-gj9vk"
Nov 12 20:59:48.016127 kubelet[2615]: I1112 20:59:48.016081 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75k8z\" (UniqueName: \"kubernetes.io/projected/de27d36a-201e-4191-8379-e6120ae4db51-kube-api-access-75k8z\") pod \"calico-apiserver-7dfb9bbd9d-zxsdt\" (UID: \"de27d36a-201e-4191-8379-e6120ae4db51\") " pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt"
Nov 12 20:59:48.016127 kubelet[2615]: I1112 20:59:48.016123 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b947df0-74b2-4106-85b7-80347fe0a3b9-calico-apiserver-certs\") pod \"calico-apiserver-7dfb9bbd9d-f7tm4\" (UID: \"3b947df0-74b2-4106-85b7-80347fe0a3b9\") " pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4"
Nov 12 20:59:48.016252 kubelet[2615]: I1112 20:59:48.016148 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46afc74-9c76-46ea-bc9f-639441ca0037-config-volume\") pod \"coredns-76f75df574-gj9vk\" (UID: \"a46afc74-9c76-46ea-bc9f-639441ca0037\") " pod="kube-system/coredns-76f75df574-gj9vk"
Nov 12 20:59:48.267249 kubelet[2615]: E1112 20:59:48.267224 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:48.267952 containerd[1475]: time="2024-11-12T20:59:48.267913177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj9vk,Uid:a46afc74-9c76-46ea-bc9f-639441ca0037,Namespace:kube-system,Attempt:0,}"
Nov 12 20:59:48.273962 kubelet[2615]: E1112 20:59:48.273935 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:59:48.274318 containerd[1475]: time="2024-11-12T20:59:48.274291838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9qn2s,Uid:e2c54d21-3c5b-4cad-88df-27707d237df7,Namespace:kube-system,Attempt:0,}"
Nov 12 20:59:48.280485 containerd[1475]: time="2024-11-12T20:59:48.280409098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-zxsdt,Uid:de27d36a-201e-4191-8379-e6120ae4db51,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 20:59:48.289839 systemd[1]: Created slice kubepods-besteffort-pod7323a1ea_5ba5_4a75_b521_01e3f15f8119.slice - libcontainer container kubepods-besteffort-pod7323a1ea_5ba5_4a75_b521_01e3f15f8119.slice.
Nov 12 20:59:48.291054 containerd[1475]: time="2024-11-12T20:59:48.290817071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bd556db-x6tzm,Uid:e456c18a-3106-4a95-8cb0-7da002cc0d2d,Namespace:calico-system,Attempt:0,}" Nov 12 20:59:48.292487 containerd[1475]: time="2024-11-12T20:59:48.292453086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2vmjh,Uid:7323a1ea-5ba5-4a75-b521-01e3f15f8119,Namespace:calico-system,Attempt:0,}" Nov 12 20:59:48.295079 containerd[1475]: time="2024-11-12T20:59:48.295027596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-f7tm4,Uid:3b947df0-74b2-4106-85b7-80347fe0a3b9,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:59:48.359030 containerd[1475]: time="2024-11-12T20:59:48.358990050Z" level=error msg="Failed to destroy network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.359655 containerd[1475]: time="2024-11-12T20:59:48.359543300Z" level=error msg="encountered an error cleaning up failed sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.359655 containerd[1475]: time="2024-11-12T20:59:48.359617329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9qn2s,Uid:e2c54d21-3c5b-4cad-88df-27707d237df7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.360221 kubelet[2615]: E1112 20:59:48.360029 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.360221 kubelet[2615]: E1112 20:59:48.360086 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9qn2s" Nov 12 20:59:48.360221 kubelet[2615]: E1112 20:59:48.360114 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-9qn2s" Nov 12 20:59:48.360334 kubelet[2615]: E1112 20:59:48.360177 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9qn2s_kube-system(e2c54d21-3c5b-4cad-88df-27707d237df7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9qn2s_kube-system(e2c54d21-3c5b-4cad-88df-27707d237df7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9qn2s" podUID="e2c54d21-3c5b-4cad-88df-27707d237df7" Nov 12 20:59:48.365688 containerd[1475]: time="2024-11-12T20:59:48.365628489Z" level=error msg="Failed to destroy network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.366231 containerd[1475]: time="2024-11-12T20:59:48.366193992Z" level=error msg="encountered an error cleaning up failed sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.366342 containerd[1475]: time="2024-11-12T20:59:48.366324156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj9vk,Uid:a46afc74-9c76-46ea-bc9f-639441ca0037,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.366656 kubelet[2615]: E1112 20:59:48.366623 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.366703 kubelet[2615]: E1112 20:59:48.366674 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gj9vk" Nov 12 20:59:48.366703 kubelet[2615]: E1112 20:59:48.366692 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gj9vk" Nov 12 20:59:48.366759 kubelet[2615]: E1112 20:59:48.366738 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gj9vk_kube-system(a46afc74-9c76-46ea-bc9f-639441ca0037)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gj9vk_kube-system(a46afc74-9c76-46ea-bc9f-639441ca0037)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gj9vk" podUID="a46afc74-9c76-46ea-bc9f-639441ca0037" Nov 12 20:59:48.398892 containerd[1475]: time="2024-11-12T20:59:48.398755719Z" level=error msg="Failed to destroy network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.399905 containerd[1475]: time="2024-11-12T20:59:48.399883099Z" level=error msg="encountered an error cleaning up failed sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.400227 containerd[1475]: time="2024-11-12T20:59:48.400205695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bd556db-x6tzm,Uid:e456c18a-3106-4a95-8cb0-7da002cc0d2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.400532 kubelet[2615]: E1112 20:59:48.400510 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.400611 kubelet[2615]: E1112 20:59:48.400562 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm" Nov 12 20:59:48.400611 kubelet[2615]: E1112 20:59:48.400580 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm" Nov 12 20:59:48.400761 kubelet[2615]: E1112 20:59:48.400634 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4bd556db-x6tzm_calico-system(e456c18a-3106-4a95-8cb0-7da002cc0d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4bd556db-x6tzm_calico-system(e456c18a-3106-4a95-8cb0-7da002cc0d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm" podUID="e456c18a-3106-4a95-8cb0-7da002cc0d2d" Nov 12 20:59:48.402331 containerd[1475]: time="2024-11-12T20:59:48.402298008Z" level=error msg="Failed to destroy network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.402891 containerd[1475]: time="2024-11-12T20:59:48.402802646Z" level=error msg="encountered an error cleaning up failed sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.402935 containerd[1475]: time="2024-11-12T20:59:48.402878318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-zxsdt,Uid:de27d36a-201e-4191-8379-e6120ae4db51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.403169 kubelet[2615]: E1112 20:59:48.403095 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.403169 kubelet[2615]: E1112 20:59:48.403147 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt" Nov 12 20:59:48.403169 kubelet[2615]: E1112 20:59:48.403168 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt" Nov 12 20:59:48.403290 kubelet[2615]: E1112 20:59:48.403218 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dfb9bbd9d-zxsdt_calico-apiserver(de27d36a-201e-4191-8379-e6120ae4db51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dfb9bbd9d-zxsdt_calico-apiserver(de27d36a-201e-4191-8379-e6120ae4db51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt" podUID="de27d36a-201e-4191-8379-e6120ae4db51" Nov 12 20:59:48.420150 containerd[1475]: time="2024-11-12T20:59:48.420089993Z" level=error msg="Failed to destroy network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.420555 containerd[1475]: time="2024-11-12T20:59:48.420527375Z" level=error msg="Failed to destroy network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.420649 containerd[1475]: time="2024-11-12T20:59:48.420528256Z" level=error msg="encountered an error cleaning up failed sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.420728 containerd[1475]: time="2024-11-12T20:59:48.420673118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2vmjh,Uid:7323a1ea-5ba5-4a75-b521-01e3f15f8119,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.420875 containerd[1475]: time="2024-11-12T20:59:48.420848148Z" level=error msg="encountered an error cleaning up failed sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.420927 containerd[1475]: time="2024-11-12T20:59:48.420898031Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-f7tm4,Uid:3b947df0-74b2-4106-85b7-80347fe0a3b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.421037 kubelet[2615]: E1112 20:59:48.420889 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.421037 kubelet[2615]: E1112 20:59:48.420944 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:48.421037 kubelet[2615]: E1112 20:59:48.420977 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2vmjh" Nov 12 20:59:48.421037 kubelet[2615]: E1112 20:59:48.421025 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.421199 kubelet[2615]: E1112 20:59:48.421039 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2vmjh_calico-system(7323a1ea-5ba5-4a75-b521-01e3f15f8119)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2vmjh_calico-system(7323a1ea-5ba5-4a75-b521-01e3f15f8119)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119" Nov 12 20:59:48.421199 kubelet[2615]: E1112 20:59:48.421053 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4" Nov 12 20:59:48.421199 kubelet[2615]: E1112 20:59:48.421069 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4" Nov 12 20:59:48.421413 kubelet[2615]: E1112 20:59:48.421113 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dfb9bbd9d-f7tm4_calico-apiserver(3b947df0-74b2-4106-85b7-80347fe0a3b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dfb9bbd9d-f7tm4_calico-apiserver(3b947df0-74b2-4106-85b7-80347fe0a3b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4" podUID="3b947df0-74b2-4106-85b7-80347fe0a3b9" Nov 12 20:59:48.646848 systemd[1]: Started sshd@10-10.0.0.160:22-10.0.0.1:48920.service - OpenSSH per-connection server daemon (10.0.0.1:48920). Nov 12 20:59:48.685066 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 48920 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:59:48.686623 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:59:48.690394 systemd-logind[1457]: New session 11 of user core. 
Nov 12 20:59:48.690672 kubelet[2615]: I1112 20:59:48.690499 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 20:59:48.691788 kubelet[2615]: I1112 20:59:48.691699 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 20:59:48.691854 containerd[1475]: time="2024-11-12T20:59:48.691782868Z" level=info msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\"" Nov 12 20:59:48.692003 containerd[1475]: time="2024-11-12T20:59:48.691980830Z" level=info msg="Ensure that sandbox 5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14 in task-service has been cleanup successfully" Nov 12 20:59:48.692052 containerd[1475]: time="2024-11-12T20:59:48.692007250Z" level=info msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" Nov 12 20:59:48.692150 containerd[1475]: time="2024-11-12T20:59:48.692130120Z" level=info msg="Ensure that sandbox bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c in task-service has been cleanup successfully" Nov 12 20:59:48.693212 kubelet[2615]: I1112 20:59:48.693189 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 20:59:48.693995 containerd[1475]: time="2024-11-12T20:59:48.693603400Z" level=info msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\"" Nov 12 20:59:48.693995 containerd[1475]: time="2024-11-12T20:59:48.693767208Z" level=info msg="Ensure that sandbox c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331 in task-service has been cleanup successfully" Nov 12 20:59:48.694495 kubelet[2615]: I1112 20:59:48.694472 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 20:59:48.695334 containerd[1475]: time="2024-11-12T20:59:48.695299328Z" level=info msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\"" Nov 12 20:59:48.695640 containerd[1475]: time="2024-11-12T20:59:48.695613108Z" level=info msg="Ensure that sandbox 3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381 in task-service has been cleanup successfully" Nov 12 20:59:48.696215 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 12 20:59:48.696403 kubelet[2615]: I1112 20:59:48.696379 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 20:59:48.697993 containerd[1475]: time="2024-11-12T20:59:48.697329244Z" level=info msg="StopPodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" Nov 12 20:59:48.697993 containerd[1475]: time="2024-11-12T20:59:48.697485217Z" level=info msg="Ensure that sandbox b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d in task-service has been cleanup successfully" Nov 12 20:59:48.702715 kubelet[2615]: E1112 20:59:48.702686 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:48.707188 kubelet[2615]: I1112 20:59:48.706863 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 20:59:48.707331 containerd[1475]: time="2024-11-12T20:59:48.706943846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:59:48.708091 containerd[1475]: time="2024-11-12T20:59:48.708071437Z" level=info msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\"" Nov 12 20:59:48.708242 containerd[1475]: time="2024-11-12T20:59:48.708221017Z" level=info msg="Ensure that sandbox 036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d in task-service has been cleanup successfully" Nov 12 20:59:48.758264 containerd[1475]: time="2024-11-12T20:59:48.757316104Z" level=error msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" failed" error="failed to destroy network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.758375 kubelet[2615]: E1112 20:59:48.757564 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 20:59:48.758375 kubelet[2615]: E1112 20:59:48.757633 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"} Nov 12 20:59:48.758375 kubelet[2615]: E1112 20:59:48.757665 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b947df0-74b2-4106-85b7-80347fe0a3b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.758375 kubelet[2615]: E1112 20:59:48.757692 2615 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"3b947df0-74b2-4106-85b7-80347fe0a3b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4" podUID="3b947df0-74b2-4106-85b7-80347fe0a3b9" Nov 12 20:59:48.760023 containerd[1475]: time="2024-11-12T20:59:48.759954935Z" level=error msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" failed" error="failed to destroy network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.760369 kubelet[2615]: E1112 20:59:48.760348 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 20:59:48.760424 kubelet[2615]: E1112 20:59:48.760387 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"} Nov 12 20:59:48.761006 kubelet[2615]: E1112 20:59:48.760451 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2c54d21-3c5b-4cad-88df-27707d237df7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.761006 kubelet[2615]: E1112 20:59:48.760591 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2c54d21-3c5b-4cad-88df-27707d237df7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9qn2s" podUID="e2c54d21-3c5b-4cad-88df-27707d237df7" Nov 12 20:59:48.766606 containerd[1475]: time="2024-11-12T20:59:48.766563496Z" level=error msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" failed" error="failed to destroy network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.766939 containerd[1475]: 
time="2024-11-12T20:59:48.766714461Z" level=error msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" failed" error="failed to destroy network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.767599 kubelet[2615]: E1112 20:59:48.767374 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 20:59:48.767599 kubelet[2615]: E1112 20:59:48.767424 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"} Nov 12 20:59:48.767599 kubelet[2615]: E1112 20:59:48.767458 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de27d36a-201e-4191-8379-e6120ae4db51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.767599 kubelet[2615]: E1112 20:59:48.767485 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de27d36a-201e-4191-8379-e6120ae4db51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt" podUID="de27d36a-201e-4191-8379-e6120ae4db51" Nov 12 20:59:48.767777 kubelet[2615]: E1112 20:59:48.767514 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 20:59:48.767777 kubelet[2615]: E1112 20:59:48.767526 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c"} Nov 12 20:59:48.767777 kubelet[2615]: E1112 20:59:48.767560 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e456c18a-3106-4a95-8cb0-7da002cc0d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.767777 kubelet[2615]: E1112 20:59:48.767581 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e456c18a-3106-4a95-8cb0-7da002cc0d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm" podUID="e456c18a-3106-4a95-8cb0-7da002cc0d2d" Nov 12 20:59:48.767892 containerd[1475]: time="2024-11-12T20:59:48.767831029Z" level=error msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" failed" error="failed to destroy network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.768080 kubelet[2615]: E1112 20:59:48.767945 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 20:59:48.768080 kubelet[2615]: E1112 20:59:48.767993 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"} Nov 12 20:59:48.768080 kubelet[2615]: E1112 20:59:48.768031 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.768080 kubelet[2615]: E1112 20:59:48.768060 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7323a1ea-5ba5-4a75-b521-01e3f15f8119\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2vmjh" podUID="7323a1ea-5ba5-4a75-b521-01e3f15f8119" Nov 12 20:59:48.772649 containerd[1475]: time="2024-11-12T20:59:48.772590907Z" level=error msg="StopPodSandbox for 
\"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" failed" error="failed to destroy network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:59:48.772847 kubelet[2615]: E1112 20:59:48.772817 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 20:59:48.772896 kubelet[2615]: E1112 20:59:48.772853 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d"} Nov 12 20:59:48.772896 kubelet[2615]: E1112 20:59:48.772883 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a46afc74-9c76-46ea-bc9f-639441ca0037\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:59:48.773034 kubelet[2615]: E1112 20:59:48.772912 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a46afc74-9c76-46ea-bc9f-639441ca0037\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gj9vk" podUID="a46afc74-9c76-46ea-bc9f-639441ca0037" Nov 12 20:59:48.818038 sshd[3601]: pam_unix(sshd:session): session closed for user core Nov 12 20:59:48.821512 systemd[1]: sshd@10-10.0.0.160:22-10.0.0.1:48920.service: Deactivated successfully. Nov 12 20:59:48.823388 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:59:48.824137 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:59:48.824904 systemd-logind[1457]: Removed session 11. Nov 12 20:59:48.883822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d-shm.mount: Deactivated successfully. 
Nov 12 20:59:53.027764 kubelet[2615]: I1112 20:59:53.027727 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:59:53.028988 kubelet[2615]: E1112 20:59:53.028924 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:53.714162 kubelet[2615]: E1112 20:59:53.714130 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:53.837379 systemd[1]: Started sshd@11-10.0.0.160:22-10.0.0.1:48922.service - OpenSSH per-connection server daemon (10.0.0.1:48922). Nov 12 20:59:53.897103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178133535.mount: Deactivated successfully. Nov 12 20:59:53.918941 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 48922 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:59:53.920656 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:59:53.925319 systemd-logind[1457]: New session 12 of user core. Nov 12 20:59:53.933219 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:59:54.876701 sshd[3757]: pam_unix(sshd:session): session closed for user core Nov 12 20:59:54.884739 systemd[1]: sshd@11-10.0.0.160:22-10.0.0.1:48922.service: Deactivated successfully. Nov 12 20:59:54.886475 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:59:54.887893 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:59:54.889612 systemd[1]: Started sshd@12-10.0.0.160:22-10.0.0.1:48938.service - OpenSSH per-connection server daemon (10.0.0.1:48938). Nov 12 20:59:54.890688 systemd-logind[1457]: Removed session 12. Nov 12 20:59:54.931075 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 48938 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:59:54.932449 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:59:54.936525 systemd-logind[1457]: New session 13 of user core. Nov 12 20:59:54.942101 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:59:55.104651 sshd[3773]: pam_unix(sshd:session): session closed for user core Nov 12 20:59:55.116900 systemd[1]: sshd@12-10.0.0.160:22-10.0.0.1:48938.service: Deactivated successfully. Nov 12 20:59:55.118762 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:59:55.120370 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:59:55.121691 systemd[1]: Started sshd@13-10.0.0.160:22-10.0.0.1:48944.service - OpenSSH per-connection server daemon (10.0.0.1:48944). Nov 12 20:59:55.122416 systemd-logind[1457]: Removed session 13. Nov 12 20:59:55.157813 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 48944 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:59:55.159377 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:59:55.163075 systemd-logind[1457]: New session 14 of user core. Nov 12 20:59:55.174107 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:59:55.573814 sshd[3785]: pam_unix(sshd:session): session closed for user core Nov 12 20:59:55.578187 systemd[1]: sshd@13-10.0.0.160:22-10.0.0.1:48944.service: Deactivated successfully. 
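[Editor's note] The recurring "Nameserver limits exceeded" warnings are benign but persistent: the kubelet copies the host's resolv.conf into pods and keeps at most three nameservers (the glibc resolver limit), dropping the rest. A sketch of that truncation; the fourth entry below is hypothetical, since the log only shows the three that survived:

```go
// Sketch of the kubelet's nameserver cap behind the dns.go:153 warnings.
// Assumption: the host resolv.conf listed a fourth server (9.9.9.9 here
// is invented) that was dropped.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // matches the classic glibc MAXNS limit

func applyLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	kept := servers[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(kept, " "))
	return kept
}

func main() {
	applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
}
```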
Nov 12 20:59:55.581108 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:59:55.582765 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:59:55.583679 systemd-logind[1457]: Removed session 14. Nov 12 20:59:55.597951 containerd[1475]: time="2024-11-12T20:59:55.597874580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:55.598926 containerd[1475]: time="2024-11-12T20:59:55.598873957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:59:55.600161 containerd[1475]: time="2024-11-12T20:59:55.600126060Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:55.602062 containerd[1475]: time="2024-11-12T20:59:55.602022192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:59:55.602624 containerd[1475]: time="2024-11-12T20:59:55.602580501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 6.895587121s" Nov 12 20:59:55.602668 containerd[1475]: time="2024-11-12T20:59:55.602625225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:59:55.610455 containerd[1475]: time="2024-11-12T20:59:55.610405181Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:59:55.675318 containerd[1475]: time="2024-11-12T20:59:55.675267176Z" level=info msg="CreateContainer within sandbox \"4d74aa2ef150c858b65d8a5166d230aeefff161dc8e884f8ec4c56e5cee54124\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f7edb9c44f85ae576626b09e786556fe405465a1e77501dca9f904c8ab0a0ff7\"" Nov 12 20:59:55.675752 containerd[1475]: time="2024-11-12T20:59:55.675731147Z" level=info msg="StartContainer for \"f7edb9c44f85ae576626b09e786556fe405465a1e77501dca9f904c8ab0a0ff7\"" Nov 12 20:59:55.740104 systemd[1]: Started cri-containerd-f7edb9c44f85ae576626b09e786556fe405465a1e77501dca9f904c8ab0a0ff7.scope - libcontainer container f7edb9c44f85ae576626b09e786556fe405465a1e77501dca9f904c8ab0a0ff7. Nov 12 20:59:55.774761 containerd[1475]: time="2024-11-12T20:59:55.774711369Z" level=info msg="StartContainer for \"f7edb9c44f85ae576626b09e786556fe405465a1e77501dca9f904c8ab0a0ff7\" returns successfully" Nov 12 20:59:55.838102 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:59:55.838254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
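[Editor's note] For scale, the pull that unblocked calico-node moved 140,580,710 bytes in 6.895587121 s, roughly 19.4 MiB/s. A quick cross-check of the figures logged above:

```go
// Cross-check of the logged pull rate for ghcr.io/flatcar/calico/node:v3.29.0.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 140580710 // "bytes read" from the stop-pulling record
	dur, err := time.ParseDuration("6.895587121s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("~%.1f MiB/s\n", float64(bytesRead)/dur.Seconds()/(1<<20)) // ~19.4 MiB/s
}
```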
Nov 12 20:59:56.722530 kubelet[2615]: E1112 20:59:56.722461 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:59:56.737887 kubelet[2615]: I1112 20:59:56.737853 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-km48h" podStartSLOduration=2.139088919 podStartE2EDuration="20.737817754s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 20:59:37.004139086 +0000 UTC m=+23.815262337" lastFinishedPulling="2024-11-12 20:59:55.602867911 +0000 UTC m=+42.413991172" observedRunningTime="2024-11-12 20:59:56.737437169 +0000 UTC m=+43.548560420" watchObservedRunningTime="2024-11-12 20:59:56.737817754 +0000 UTC m=+43.548941005" Nov 12 20:59:57.217997 kernel: bpftool[4020]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:59:57.437979 systemd-networkd[1411]: vxlan.calico: Link UP Nov 12 20:59:57.437993 systemd-networkd[1411]: vxlan.calico: Gained carrier Nov 12 20:59:59.298137 systemd-networkd[1411]: vxlan.calico: Gained IPv6LL Nov 12 20:59:59.905499 containerd[1475]: time="2024-11-12T20:59:59.905459127Z" level=info msg="StopPodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" Nov 12 20:59:59.909053 kubelet[2615]: E1112 20:59:59.907850 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.149 [INFO][4103] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.149 [INFO][4103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" iface="eth0" netns="/var/run/netns/cni-e12d6a78-9072-b993-3caf-f116b27605c9" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.150 [INFO][4103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" iface="eth0" netns="/var/run/netns/cni-e12d6a78-9072-b993-3caf-f116b27605c9" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.150 [INFO][4103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" iface="eth0" netns="/var/run/netns/cni-e12d6a78-9072-b993-3caf-f116b27605c9" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.150 [INFO][4103] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.150 [INFO][4103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.199 [INFO][4145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.200 [INFO][4145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.200 [INFO][4145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.205 [WARNING][4145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.205 [INFO][4145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.207 [INFO][4145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:00.212234 containerd[1475]: 2024-11-12 21:00:00.209 [INFO][4103] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:00.212686 containerd[1475]: time="2024-11-12T21:00:00.212297445Z" level=info msg="TearDown network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" successfully" Nov 12 21:00:00.212686 containerd[1475]: time="2024-11-12T21:00:00.212324186Z" level=info msg="StopPodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" returns successfully" Nov 12 21:00:00.213093 kubelet[2615]: E1112 21:00:00.212966 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:00.213609 containerd[1475]: time="2024-11-12T21:00:00.213564234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj9vk,Uid:a46afc74-9c76-46ea-bc9f-639441ca0037,Namespace:kube-system,Attempt:1,}" Nov 12 21:00:00.215044 systemd[1]: run-netns-cni\x2de12d6a78\x2d9072\x2db993\x2d3caf\x2df116b27605c9.mount: Deactivated successfully. Nov 12 21:00:00.587017 systemd[1]: Started sshd@14-10.0.0.160:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286). 
Nov 12 21:00:00.625777 systemd-networkd[1411]: cali9c6116b2abe: Link UP Nov 12 21:00:00.626562 systemd-networkd[1411]: cali9c6116b2abe: Gained carrier Nov 12 21:00:00.630996 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 21:00:00.630460 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:00:00.637711 systemd-logind[1457]: New session 15 of user core. Nov 12 21:00:00.649095 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.507 [INFO][4152] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--gj9vk-eth0 coredns-76f75df574- kube-system a46afc74-9c76-46ea-bc9f-639441ca0037 867 0 2024-11-12 20:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-gj9vk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c6116b2abe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.507 [INFO][4152] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.534 [INFO][4165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" HandleID="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.540 [INFO][4165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" HandleID="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-gj9vk", "timestamp":"2024-11-12 21:00:00.534145636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.540 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.540 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.540 [INFO][4165] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.542 [INFO][4165] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.546 [INFO][4165] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.549 [INFO][4165] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.551 [INFO][4165] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.552 [INFO][4165] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.552 [INFO][4165] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.554 [INFO][4165] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.570 [INFO][4165] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.620 [INFO][4165] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.620 [INFO][4165] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" host="localhost" Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.620 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
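[Editor's note] The IPAM exchange above is Calico's block-affinity allocation: the node confirms its claim on the /26 block 192.168.88.128/26 and assigns 192.168.88.129 to the new coredns endpoint. A quick check of the arithmetic:

```go
// Sanity-check the block membership and size from the IPAM records above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	ip := netip.MustParseAddr("192.168.88.129")
	fmt.Println(block.Contains(ip))       // true: the claimed IP is in the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
}
```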
Nov 12 21:00:00.662786 containerd[1475]: 2024-11-12 21:00:00.620 [INFO][4165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" HandleID="k8s-pod-network.c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.623 [INFO][4152] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gj9vk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a46afc74-9c76-46ea-bc9f-639441ca0037", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-gj9vk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c6116b2abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.623 [INFO][4152] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.623 [INFO][4152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c6116b2abe ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.625 [INFO][4152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.625 
[INFO][4152] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gj9vk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a46afc74-9c76-46ea-bc9f-639441ca0037", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b", Pod:"coredns-76f75df574-gj9vk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c6116b2abe", MAC:"76:dd:9f:5a:73:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:00.663743 containerd[1475]: 2024-11-12 21:00:00.659 [INFO][4152] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b" Namespace="kube-system" Pod="coredns-76f75df574-gj9vk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:00.711586 containerd[1475]: time="2024-11-12T21:00:00.711502980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:00.711586 containerd[1475]: time="2024-11-12T21:00:00.711549397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:00.711586 containerd[1475]: time="2024-11-12T21:00:00.711566920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:00.711773 containerd[1475]: time="2024-11-12T21:00:00.711650547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:00.731118 systemd[1]: Started cri-containerd-c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b.scope - libcontainer container c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b. 
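[Editor's note] The WorkloadEndpoint dump above prints port numbers as Go hex literals: 0x35 is 53 (DNS over UDP and TCP) and 0x23c1 is 9153, the coredns Prometheus metrics port. Decoded:

```go
// Decode the hex ports from the WorkloadEndpoint dump above.
package main

import "fmt"

func main() {
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns -> 53, dns-tcp -> 53, metrics -> 9153
	}
}
```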
Nov 12 21:00:00.743298 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:00.767612 containerd[1475]: time="2024-11-12T21:00:00.767547860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj9vk,Uid:a46afc74-9c76-46ea-bc9f-639441ca0037,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b\"" Nov 12 21:00:00.767616 sshd[4174]: pam_unix(sshd:session): session closed for user core Nov 12 21:00:00.768696 kubelet[2615]: E1112 21:00:00.768658 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:00.771003 containerd[1475]: time="2024-11-12T21:00:00.770954378Z" level=info msg="CreateContainer within sandbox \"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 21:00:00.771711 systemd[1]: sshd@14-10.0.0.160:22-10.0.0.1:60286.service: Deactivated successfully. Nov 12 21:00:00.773568 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 21:00:00.774160 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Nov 12 21:00:00.775000 systemd-logind[1457]: Removed session 15. Nov 12 21:00:00.919497 kubelet[2615]: E1112 21:00:00.919414 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:00.962990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301864123.mount: Deactivated successfully. Nov 12 21:00:01.035469 containerd[1475]: time="2024-11-12T21:00:01.035419772Z" level=info msg="CreateContainer within sandbox \"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8\"" Nov 12 21:00:01.035885 containerd[1475]: time="2024-11-12T21:00:01.035854839Z" level=info msg="StartContainer for \"48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8\"" Nov 12 21:00:01.076141 systemd[1]: Started cri-containerd-48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8.scope - libcontainer container 48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8. Nov 12 21:00:01.165005 containerd[1475]: time="2024-11-12T21:00:01.164953255Z" level=info msg="StartContainer for \"48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8\" returns successfully" Nov 12 21:00:01.923260 kubelet[2615]: E1112 21:00:01.923067 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:01.925845 systemd[1]: run-containerd-runc-k8s.io-48d8a22cad48b4189ec6bb02aae61e7dd7123663c823e0926f41338d591afdc8-runc.KYOCj7.mount: Deactivated successfully. 
Nov 12 21:00:01.986168 systemd-networkd[1411]: cali9c6116b2abe: Gained IPv6LL Nov 12 21:00:02.062423 kubelet[2615]: I1112 21:00:02.061708 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gj9vk" podStartSLOduration=35.061669803 podStartE2EDuration="35.061669803s" podCreationTimestamp="2024-11-12 20:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:00:02.034551145 +0000 UTC m=+48.845674396" watchObservedRunningTime="2024-11-12 21:00:02.061669803 +0000 UTC m=+48.872793054" Nov 12 21:00:02.281440 containerd[1475]: time="2024-11-12T21:00:02.281299225Z" level=info msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\"" Nov 12 21:00:02.281440 containerd[1475]: time="2024-11-12T21:00:02.281342236Z" level=info msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\"" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.427 [INFO][4317] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.427 [INFO][4317] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" iface="eth0" netns="/var/run/netns/cni-c97bb572-5403-4dab-64e8-e8950710aa78" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.427 [INFO][4317] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" iface="eth0" netns="/var/run/netns/cni-c97bb572-5403-4dab-64e8-e8950710aa78" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.428 [INFO][4317] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" iface="eth0" netns="/var/run/netns/cni-c97bb572-5403-4dab-64e8-e8950710aa78" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.428 [INFO][4317] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.428 [INFO][4317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.449 [INFO][4333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.449 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.449 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.474 [WARNING][4333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.474 [INFO][4333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.480 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:02.485179 containerd[1475]: 2024-11-12 21:00:02.482 [INFO][4317] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:02.489656 containerd[1475]: time="2024-11-12T21:00:02.489608677Z" level=info msg="TearDown network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" successfully" Nov 12 21:00:02.489656 containerd[1475]: time="2024-11-12T21:00:02.489649254Z" level=info msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" returns successfully" Nov 12 21:00:02.489887 systemd[1]: run-netns-cni\x2dc97bb572\x2d5403\x2d4dab\x2d64e8\x2de8950710aa78.mount: Deactivated successfully. Nov 12 21:00:02.490011 kubelet[2615]: E1112 21:00:02.489990 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:02.491427 containerd[1475]: time="2024-11-12T21:00:02.491384933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9qn2s,Uid:e2c54d21-3c5b-4cad-88df-27707d237df7,Namespace:kube-system,Attempt:1,}" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.474 [INFO][4318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.474 [INFO][4318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" iface="eth0" netns="/var/run/netns/cni-674a3f9d-209d-bf1d-a158-c81a8f105884" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.474 [INFO][4318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" iface="eth0" netns="/var/run/netns/cni-674a3f9d-209d-bf1d-a158-c81a8f105884" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.475 [INFO][4318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" iface="eth0" netns="/var/run/netns/cni-674a3f9d-209d-bf1d-a158-c81a8f105884" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.475 [INFO][4318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.475 [INFO][4318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.497 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.497 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.497 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.517 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.517 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.518 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:02.523676 containerd[1475]: 2024-11-12 21:00:02.521 [INFO][4318] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Nov 12 21:00:02.524043 containerd[1475]: time="2024-11-12T21:00:02.523833892Z" level=info msg="TearDown network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" successfully" Nov 12 21:00:02.524043 containerd[1475]: time="2024-11-12T21:00:02.523855613Z" level=info msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" returns successfully" Nov 12 21:00:02.524485 containerd[1475]: time="2024-11-12T21:00:02.524450941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2vmjh,Uid:7323a1ea-5ba5-4a75-b521-01e3f15f8119,Namespace:calico-system,Attempt:1,}" Nov 12 21:00:02.526414 systemd[1]: run-netns-cni\x2d674a3f9d\x2d209d\x2dbf1d\x2da158\x2dc81a8f105884.mount: Deactivated successfully. 
Nov 12 21:00:02.924573 kubelet[2615]: E1112 21:00:02.924547 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:03.158635 systemd-networkd[1411]: calie9fb8d80d16: Link UP Nov 12 21:00:03.159601 systemd-networkd[1411]: calie9fb8d80d16: Gained carrier Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.058 [INFO][4347] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--9qn2s-eth0 coredns-76f75df574- kube-system e2c54d21-3c5b-4cad-88df-27707d237df7 895 0 2024-11-12 20:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-9qn2s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9fb8d80d16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.058 [INFO][4347] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.081 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" HandleID="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.089 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" HandleID="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133d20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-9qn2s", "timestamp":"2024-11-12 21:00:03.081851174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.089 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.089 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
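The recurring kubelet dns.go:153 error is the kubelet noticing that the node's resolv.conf lists more nameservers than the classic resolver limit of three, so it applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and warns about the rest. A small Go check along the same lines:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Glibc-style resolvers use at most three nameservers; the kubelet
    // mirrors that limit and drops any extra entries.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
                len(servers), servers[:maxNameservers])
        }
    }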
Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.089 [INFO][4360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.091 [INFO][4360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.094 [INFO][4360] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.098 [INFO][4360] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.099 [INFO][4360] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.101 [INFO][4360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.101 [INFO][4360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.102 [INFO][4360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.116 [INFO][4360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.153 [INFO][4360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.153 [INFO][4360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" host="localhost" Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.153 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
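The IPAM walk above is Calico confirming this host's affinity for the 192.168.88.128/26 block before claiming 192.168.88.130 from it: Calico carves its pool into per-host /26 blocks, each spanning 64 addresses. A sketch of the prefix arithmetic with the standard library's net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        addr := netip.MustParseAddr("192.168.88.130")

        // A /26 spans 2^(32-26) = 64 addresses: .128 through .191.
        fmt.Println(block.Contains(addr))     // true
        fmt.Println(1 << (32 - block.Bits())) // 64
    }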
Nov 12 21:00:03.203505 containerd[1475]: 2024-11-12 21:00:03.153 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" HandleID="k8s-pod-network.375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.156 [INFO][4347] cni-plugin/k8s.go 386: Populated endpoint ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--9qn2s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e2c54d21-3c5b-4cad-88df-27707d237df7", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-9qn2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9fb8d80d16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.156 [INFO][4347] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.156 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9fb8d80d16 ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.159 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.159 
[INFO][4347] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--9qn2s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e2c54d21-3c5b-4cad-88df-27707d237df7", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c", Pod:"coredns-76f75df574-9qn2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9fb8d80d16", MAC:"2a:f2:7d:fa:e5:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:03.204068 containerd[1475]: 2024-11-12 21:00:03.200 [INFO][4347] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c" Namespace="kube-system" Pod="coredns-76f75df574-9qn2s" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:03.251210 containerd[1475]: time="2024-11-12T21:00:03.250993038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:03.251210 containerd[1475]: time="2024-11-12T21:00:03.251050305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:03.251210 containerd[1475]: time="2024-11-12T21:00:03.251069792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:03.251210 containerd[1475]: time="2024-11-12T21:00:03.251185098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:03.272097 systemd[1]: Started cri-containerd-375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c.scope - libcontainer container 375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c. 
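The WorkloadEndpoint dumps print port numbers in Go hex notation, which is easy to misread: Port:0x35 is DNS on 53, and Port:0x23c1 is the CoreDNS metrics port 9153. A trivial decode:

    package main

    import "fmt"

    func main() {
        // Hex port values as they appear in the endpoint struct dumps above.
        for _, p := range []uint16{0x35, 0x23c1} {
            fmt.Printf("0x%x = %d\n", p, p)
        }
    }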
Nov 12 21:00:03.282155 containerd[1475]: time="2024-11-12T21:00:03.282121705Z" level=info msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" Nov 12 21:00:03.283110 containerd[1475]: time="2024-11-12T21:00:03.283075596Z" level=info msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\"" Nov 12 21:00:03.285072 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:03.299133 systemd-networkd[1411]: cali49e740d3df8: Link UP Nov 12 21:00:03.301053 systemd-networkd[1411]: cali49e740d3df8: Gained carrier Nov 12 21:00:03.322312 containerd[1475]: time="2024-11-12T21:00:03.322262697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9qn2s,Uid:e2c54d21-3c5b-4cad-88df-27707d237df7,Namespace:kube-system,Attempt:1,} returns sandbox id \"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c\"" Nov 12 21:00:03.324015 kubelet[2615]: E1112 21:00:03.323255 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:03.327217 containerd[1475]: time="2024-11-12T21:00:03.326100423Z" level=info msg="CreateContainer within sandbox \"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.156 [INFO][4370] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2vmjh-eth0 csi-node-driver- calico-system 7323a1ea-5ba5-4a75-b521-01e3f15f8119 896 0 2024-11-12 20:59:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2vmjh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali49e740d3df8 [] []}} ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.156 [INFO][4370] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.183 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" HandleID="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.204 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" HandleID="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00030b700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2vmjh", "timestamp":"2024-11-12 21:00:03.183957341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.204 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.204 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.204 [INFO][4384] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.206 [INFO][4384] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.209 [INFO][4384] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.213 [INFO][4384] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.215 [INFO][4384] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.217 [INFO][4384] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.217 [INFO][4384] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.218 [INFO][4384] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.252 [INFO][4384] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.293 [INFO][4384] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.293 [INFO][4384] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" host="localhost" Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.293 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
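The assignArgs=ipam.AutoAssignArgs{…} dumps show exactly what the CNI plugin hands to libcalico-go's IPAM. A hedged sketch of the same call from a standalone program; the module path and return shape assume a recent monorepo Calico release, and the NewFromEnv wiring is hypothetical (the CNI plugin builds its client from the CNI netconf instead):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        // Reads DATASTORE_TYPE / KUBECONFIG etc. from the environment,
        // the same way calicoctl does.
        c, err := clientv3.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }

        // Hypothetical handle; real handles follow the
        // "k8s-pod-network.<container-id>" pattern seen in the log.
        handle := "k8s-pod-network.example"
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "calico-system",
                "pod":       "csi-node-driver-2vmjh",
                "node":      "localhost",
            },
            Hostname: "localhost",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("assigned:", v4)
    }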
Nov 12 21:00:03.515491 containerd[1475]: 2024-11-12 21:00:03.293 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" HandleID="k8s-pod-network.3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.297 [INFO][4370] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2vmjh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7323a1ea-5ba5-4a75-b521-01e3f15f8119", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2vmjh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49e740d3df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.297 [INFO][4370] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.297 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49e740d3df8 ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.299 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.301 [INFO][4370] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2vmjh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7323a1ea-5ba5-4a75-b521-01e3f15f8119", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c", Pod:"csi-node-driver-2vmjh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49e740d3df8", MAC:"a2:11:12:40:16:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:03.516380 containerd[1475]: 2024-11-12 21:00:03.512 [INFO][4370] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c" Namespace="calico-system" Pod="csi-node-driver-2vmjh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2vmjh-eth0" Nov 12 21:00:03.603819 containerd[1475]: time="2024-11-12T21:00:03.602435777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:03.603819 containerd[1475]: time="2024-11-12T21:00:03.603082280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:03.603819 containerd[1475]: time="2024-11-12T21:00:03.603096487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:03.603819 containerd[1475]: time="2024-11-12T21:00:03.603170606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:03.627114 systemd[1]: Started cri-containerd-3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c.scope - libcontainer container 3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c. 
Nov 12 21:00:03.640252 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:03.650433 containerd[1475]: time="2024-11-12T21:00:03.650398867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2vmjh,Uid:7323a1ea-5ba5-4a75-b521-01e3f15f8119,Namespace:calico-system,Attempt:1,} returns sandbox id \"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c\"" Nov 12 21:00:03.652014 containerd[1475]: time="2024-11-12T21:00:03.651987290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.607 [INFO][4472] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.607 [INFO][4472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" iface="eth0" netns="/var/run/netns/cni-47334aec-e21d-34db-d386-72ba668504e4" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.607 [INFO][4472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" iface="eth0" netns="/var/run/netns/cni-47334aec-e21d-34db-d386-72ba668504e4" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.608 [INFO][4472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" iface="eth0" netns="/var/run/netns/cni-47334aec-e21d-34db-d386-72ba668504e4" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.608 [INFO][4472] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.608 [INFO][4472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.636 [INFO][4535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.636 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.636 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.696 [WARNING][4535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.696 [INFO][4535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.698 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:03.702372 containerd[1475]: 2024-11-12 21:00:03.700 [INFO][4472] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:03.702694 containerd[1475]: time="2024-11-12T21:00:03.702516039Z" level=info msg="TearDown network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" successfully" Nov 12 21:00:03.702694 containerd[1475]: time="2024-11-12T21:00:03.702542268Z" level=info msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" returns successfully" Nov 12 21:00:03.703132 containerd[1475]: time="2024-11-12T21:00:03.703110204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bd556db-x6tzm,Uid:e456c18a-3106-4a95-8cb0-7da002cc0d2d,Namespace:calico-system,Attempt:1,}" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.605 [INFO][4473] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.606 [INFO][4473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" iface="eth0" netns="/var/run/netns/cni-5266394a-a489-bb75-a3bf-e9be706aa31f" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.606 [INFO][4473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" iface="eth0" netns="/var/run/netns/cni-5266394a-a489-bb75-a3bf-e9be706aa31f" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.606 [INFO][4473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" iface="eth0" netns="/var/run/netns/cni-5266394a-a489-bb75-a3bf-e9be706aa31f" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.606 [INFO][4473] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.606 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.641 [INFO][4529] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.641 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.698 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.736 [WARNING][4529] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.736 [INFO][4529] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.738 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:03.742916 containerd[1475]: 2024-11-12 21:00:03.740 [INFO][4473] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Nov 12 21:00:03.743430 containerd[1475]: time="2024-11-12T21:00:03.743078270Z" level=info msg="TearDown network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" successfully" Nov 12 21:00:03.743430 containerd[1475]: time="2024-11-12T21:00:03.743106173Z" level=info msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" returns successfully" Nov 12 21:00:03.743884 containerd[1475]: time="2024-11-12T21:00:03.743853647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-f7tm4,Uid:3b947df0-74b2-4106-85b7-80347fe0a3b9,Namespace:calico-apiserver,Attempt:1,}" Nov 12 21:00:03.929247 kubelet[2615]: E1112 21:00:03.929145 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:03.966218 systemd[1]: run-netns-cni\x2d5266394a\x2da489\x2dbb75\x2da3bf\x2de9be706aa31f.mount: Deactivated successfully. Nov 12 21:00:03.966317 systemd[1]: run-netns-cni\x2d47334aec\x2de21d\x2d34db\x2dd386\x2d72ba668504e4.mount: Deactivated successfully. 
Nov 12 21:00:03.966907 containerd[1475]: time="2024-11-12T21:00:03.966869942Z" level=info msg="CreateContainer within sandbox \"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0497b93b1690c1250ba2bf7e5747c06c8e7f1427db600bb2266be0f250cdae3c\"" Nov 12 21:00:03.967730 containerd[1475]: time="2024-11-12T21:00:03.967604692Z" level=info msg="StartContainer for \"0497b93b1690c1250ba2bf7e5747c06c8e7f1427db600bb2266be0f250cdae3c\"" Nov 12 21:00:03.997092 systemd[1]: Started cri-containerd-0497b93b1690c1250ba2bf7e5747c06c8e7f1427db600bb2266be0f250cdae3c.scope - libcontainer container 0497b93b1690c1250ba2bf7e5747c06c8e7f1427db600bb2266be0f250cdae3c. Nov 12 21:00:04.061109 containerd[1475]: time="2024-11-12T21:00:04.061040361Z" level=info msg="StartContainer for \"0497b93b1690c1250ba2bf7e5747c06c8e7f1427db600bb2266be0f250cdae3c\" returns successfully" Nov 12 21:00:04.283235 containerd[1475]: time="2024-11-12T21:00:04.282862558Z" level=info msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\"" Nov 12 21:00:04.314075 systemd-networkd[1411]: calic4844a9522e: Link UP Nov 12 21:00:04.316783 systemd-networkd[1411]: calic4844a9522e: Gained carrier Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.236 [INFO][4607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0 calico-kube-controllers-7b4bd556db- calico-system e456c18a-3106-4a95-8cb0-7da002cc0d2d 911 0 2024-11-12 20:59:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b4bd556db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b4bd556db-x6tzm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic4844a9522e [] []}} ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.236 [INFO][4607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.270 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" HandleID="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.278 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" HandleID="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8fa0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b4bd556db-x6tzm", "timestamp":"2024-11-12 21:00:04.270043927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.278 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.278 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.278 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.279 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.284 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.290 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.292 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.294 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.294 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.295 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.299 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" host="localhost" Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 21:00:04.332012 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" HandleID="k8s-pod-network.b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.310 [INFO][4607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0", GenerateName:"calico-kube-controllers-7b4bd556db-", Namespace:"calico-system", SelfLink:"", UID:"e456c18a-3106-4a95-8cb0-7da002cc0d2d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bd556db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b4bd556db-x6tzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4844a9522e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.310 [INFO][4607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.310 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4844a9522e ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.316 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.318 [INFO][4607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0", GenerateName:"calico-kube-controllers-7b4bd556db-", Namespace:"calico-system", SelfLink:"", UID:"e456c18a-3106-4a95-8cb0-7da002cc0d2d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bd556db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd", Pod:"calico-kube-controllers-7b4bd556db-x6tzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4844a9522e", MAC:"ce:ec:90:6f:32:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.333042 containerd[1475]: 2024-11-12 21:00:04.328 [INFO][4607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd" Namespace="calico-system" Pod="calico-kube-controllers-7b4bd556db-x6tzm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:04.354954 systemd-networkd[1411]: cali5a9bb8cdb70: Link UP Nov 12 21:00:04.356124 systemd-networkd[1411]: cali5a9bb8cdb70: Gained carrier Nov 12 21:00:04.365287 containerd[1475]: time="2024-11-12T21:00:04.365190407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:04.365520 containerd[1475]: time="2024-11-12T21:00:04.365438132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:04.365520 containerd[1475]: time="2024-11-12T21:00:04.365479379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.365922 containerd[1475]: time="2024-11-12T21:00:04.365860755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.254 [INFO][4620] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0 calico-apiserver-7dfb9bbd9d- calico-apiserver 3b947df0-74b2-4106-85b7-80347fe0a3b9 912 0 2024-11-12 20:59:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dfb9bbd9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dfb9bbd9d-f7tm4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a9bb8cdb70 [] []}} ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.255 [INFO][4620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.281 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" HandleID="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.291 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" HandleID="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dfb9bbd9d-f7tm4", "timestamp":"2024-11-12 21:00:04.28185754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.291 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
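Each "Setting the host side veth name to cali…" line reflects Calico's interface-naming scheme: a fixed prefix plus a hash fragment, truncated to fit the kernel's 15-byte IFNAMSIZ. A sketch under the assumption that the hash input is "namespace.pod"; Felix's prefix is configurable and the exact hash input can differ by version, so the output need not reproduce the names in this journal:

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    // vethNameForWorkload sketches Calico's host-side interface naming:
    // "cali" plus the first 11 hex chars of a SHA-1 over "namespace.pod",
    // keeping the result inside the kernel's 15-byte interface-name limit.
    func vethNameForWorkload(namespace, pod string) string {
        h := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
        return fmt.Sprintf("cali%x", h)[:15]
    }

    func main() {
        fmt.Println(vethNameForWorkload("calico-apiserver", "calico-apiserver-7dfb9bbd9d-f7tm4"))
    }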
Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.305 [INFO][4641] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.309 [INFO][4641] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.314 [INFO][4641] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.323 [INFO][4641] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.325 [INFO][4641] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.329 [INFO][4641] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.329 [INFO][4641] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.331 [INFO][4641] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89 Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.339 [INFO][4641] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.347 [INFO][4641] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.347 [INFO][4641] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" host="localhost" Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.347 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
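The systemd-networkd "Link UP" / "Gained carrier" pairs fire as Calico brings each host-side veth up. Inspecting and raising such a link with the vishvananda/netlink package, which must run on the node itself (and needs CAP_NET_ADMIN to change link state):

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // One of the cali* host-side veths from the journal above.
        link, err := netlink.LinkByName("cali5a9bb8cdb70")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(link.Attrs().Name, link.Attrs().OperState)

        // Bringing a link up is what produces the "Link UP" and
        // "Gained carrier" lines systemd-networkd logs.
        if err := netlink.LinkSetUp(link); err != nil {
            log.Fatal(err)
        }
    }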
Nov 12 21:00:04.376603 containerd[1475]: 2024-11-12 21:00:04.347 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" HandleID="k8s-pod-network.6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.350 [INFO][4620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b947df0-74b2-4106-85b7-80347fe0a3b9", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dfb9bbd9d-f7tm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a9bb8cdb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.350 [INFO][4620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.350 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a9bb8cdb70 ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.357 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.357 [INFO][4620] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b947df0-74b2-4106-85b7-80347fe0a3b9", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89", Pod:"calico-apiserver-7dfb9bbd9d-f7tm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a9bb8cdb70", MAC:"2a:3e:05:c8:ea:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.377319 containerd[1475]: 2024-11-12 21:00:04.372 [INFO][4620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-f7tm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0" Nov 12 21:00:04.392247 systemd[1]: Started cri-containerd-b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd.scope - libcontainer container b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd. Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.337 [INFO][4666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.337 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" iface="eth0" netns="/var/run/netns/cni-8b4c8a54-3a61-f191-7a5f-434faba153b2" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.338 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" iface="eth0" netns="/var/run/netns/cni-8b4c8a54-3a61-f191-7a5f-434faba153b2" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.339 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" iface="eth0" netns="/var/run/netns/cni-8b4c8a54-3a61-f191-7a5f-434faba153b2" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.339 [INFO][4666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.339 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.380 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.382 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.382 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.390 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.391 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.392 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:04.400679 containerd[1475]: 2024-11-12 21:00:04.396 [INFO][4666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Nov 12 21:00:04.401915 containerd[1475]: time="2024-11-12T21:00:04.401470355Z" level=info msg="TearDown network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" successfully" Nov 12 21:00:04.401915 containerd[1475]: time="2024-11-12T21:00:04.401507184Z" level=info msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" returns successfully" Nov 12 21:00:04.402303 containerd[1475]: time="2024-11-12T21:00:04.402275978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-zxsdt,Uid:de27d36a-201e-4191-8379-e6120ae4db51,Namespace:calico-apiserver,Attempt:1,}" Nov 12 21:00:04.405182 containerd[1475]: time="2024-11-12T21:00:04.404926996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:04.405182 containerd[1475]: time="2024-11-12T21:00:04.405012707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:04.405182 containerd[1475]: time="2024-11-12T21:00:04.405026653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.405182 containerd[1475]: time="2024-11-12T21:00:04.405116562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.406812 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:04.418123 systemd-networkd[1411]: cali49e740d3df8: Gained IPv6LL Nov 12 21:00:04.426177 systemd[1]: Started cri-containerd-6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89.scope - libcontainer container 6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89. Nov 12 21:00:04.438892 containerd[1475]: time="2024-11-12T21:00:04.438834849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bd556db-x6tzm,Uid:e456c18a-3106-4a95-8cb0-7da002cc0d2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd\"" Nov 12 21:00:04.446108 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:04.472634 containerd[1475]: time="2024-11-12T21:00:04.472583393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-f7tm4,Uid:3b947df0-74b2-4106-85b7-80347fe0a3b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89\"" Nov 12 21:00:04.518020 systemd-networkd[1411]: cali76dd4988ef8: Link UP Nov 12 21:00:04.518229 systemd-networkd[1411]: cali76dd4988ef8: Gained carrier Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.460 [INFO][4773] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0 calico-apiserver-7dfb9bbd9d- calico-apiserver de27d36a-201e-4191-8379-e6120ae4db51 935 0 2024-11-12 20:59:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dfb9bbd9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dfb9bbd9d-zxsdt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali76dd4988ef8 [] []}} ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.460 [INFO][4773] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.487 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" 
HandleID="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.494 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" HandleID="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dfb9bbd9d-zxsdt", "timestamp":"2024-11-12 21:00:04.487239897 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.494 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.494 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.494 [INFO][4807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.495 [INFO][4807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.498 [INFO][4807] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.502 [INFO][4807] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.503 [INFO][4807] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.504 [INFO][4807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.505 [INFO][4807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.506 [INFO][4807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4 Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.509 [INFO][4807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.513 [INFO][4807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.513 [INFO][4807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" host="localhost" Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.513 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:04.535058 containerd[1475]: 2024-11-12 21:00:04.513 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" HandleID="k8s-pod-network.924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.515 [INFO][4773] cni-plugin/k8s.go 386: Populated endpoint ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"de27d36a-201e-4191-8379-e6120ae4db51", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dfb9bbd9d-zxsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76dd4988ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.515 [INFO][4773] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.515 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76dd4988ef8 ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.518 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.518 [INFO][4773] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"de27d36a-201e-4191-8379-e6120ae4db51", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4", Pod:"calico-apiserver-7dfb9bbd9d-zxsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76dd4988ef8", MAC:"fe:14:81:f0:74:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:04.535590 containerd[1475]: 2024-11-12 21:00:04.530 [INFO][4773] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4" Namespace="calico-apiserver" Pod="calico-apiserver-7dfb9bbd9d-zxsdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0" Nov 12 21:00:04.555136 containerd[1475]: time="2024-11-12T21:00:04.555009607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:00:04.555136 containerd[1475]: time="2024-11-12T21:00:04.555069719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:00:04.555136 containerd[1475]: time="2024-11-12T21:00:04.555082183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.555320 containerd[1475]: time="2024-11-12T21:00:04.555171832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:00:04.575107 systemd[1]: Started cri-containerd-924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4.scope - libcontainer container 924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4. 
Nov 12 21:00:04.586139 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 21:00:04.612772 containerd[1475]: time="2024-11-12T21:00:04.612715329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfb9bbd9d-zxsdt,Uid:de27d36a-201e-4191-8379-e6120ae4db51,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4\"" Nov 12 21:00:04.802137 systemd-networkd[1411]: calie9fb8d80d16: Gained IPv6LL Nov 12 21:00:04.933251 kubelet[2615]: E1112 21:00:04.933219 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:04.942188 kubelet[2615]: I1112 21:00:04.942144 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9qn2s" podStartSLOduration=37.942101845 podStartE2EDuration="37.942101845s" podCreationTimestamp="2024-11-12 20:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:00:04.941644135 +0000 UTC m=+51.752767406" watchObservedRunningTime="2024-11-12 21:00:04.942101845 +0000 UTC m=+51.753225096" Nov 12 21:00:04.971172 systemd[1]: run-netns-cni\x2d8b4c8a54\x2d3a61\x2df191\x2d7a5f\x2d434faba153b2.mount: Deactivated successfully. Nov 12 21:00:05.786043 systemd[1]: Started sshd@15-10.0.0.160:22-10.0.0.1:53970.service - OpenSSH per-connection server daemon (10.0.0.1:53970). Nov 12 21:00:05.825436 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 53970 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 21:00:05.827122 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:00:05.830845 systemd-logind[1457]: New session 16 of user core. Nov 12 21:00:05.837083 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 21:00:05.938789 kubelet[2615]: E1112 21:00:05.938717 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:05.955949 sshd[4880]: pam_unix(sshd:session): session closed for user core Nov 12 21:00:05.960240 systemd[1]: sshd@15-10.0.0.160:22-10.0.0.1:53970.service: Deactivated successfully. Nov 12 21:00:05.962317 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 21:00:05.963260 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Nov 12 21:00:05.964388 systemd-logind[1457]: Removed session 16. 
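The kubelet pod_startup_latency_tracker record above is straightforward arithmetic over the logged timestamps: E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (zero here, since first/lastFinishedPulling are the zero time, so both durations are 37.942101845s). A sketch of that computation on the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2024-11-12 20:59:27 +0000 UTC")
	running, _ := time.Parse(layout, "2024-11-12 21:00:04.942101845 +0000 UTC")

	e2e := running.Sub(created)
	pullWindow := time.Duration(0) // firstStartedPulling == lastFinishedPulling == zero time
	slo := e2e - pullWindow

	fmt.Println("podStartE2EDuration:", e2e) // 37.942101845s
	fmt.Println("podStartSLOduration:", slo) // 37.942101845s, matching the record above
}
```

A later record for calico-kube-controllers shows the two diverging once a real pull window exists: roughly 34.9628s E2E minus the 5.9441s between firstStartedPulling and lastFinishedPulling gives the logged SLO duration of about 29.0187s.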
Nov 12 21:00:06.018126 systemd-networkd[1411]: cali5a9bb8cdb70: Gained IPv6LL Nov 12 21:00:06.210172 systemd-networkd[1411]: calic4844a9522e: Gained IPv6LL Nov 12 21:00:06.466160 systemd-networkd[1411]: cali76dd4988ef8: Gained IPv6LL Nov 12 21:00:06.940643 kubelet[2615]: E1112 21:00:06.940602 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 21:00:07.656188 containerd[1475]: time="2024-11-12T21:00:07.656141425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:07.656928 containerd[1475]: time="2024-11-12T21:00:07.656894750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 21:00:07.658031 containerd[1475]: time="2024-11-12T21:00:07.658007239Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:07.661449 containerd[1475]: time="2024-11-12T21:00:07.661415509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:07.662140 containerd[1475]: time="2024-11-12T21:00:07.662099592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 4.010016933s" Nov 12 21:00:07.662140 containerd[1475]: time="2024-11-12T21:00:07.662139828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 21:00:07.662633 containerd[1475]: time="2024-11-12T21:00:07.662613788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 21:00:07.663597 containerd[1475]: time="2024-11-12T21:00:07.663576166Z" level=info msg="CreateContainer within sandbox \"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 21:00:07.679901 containerd[1475]: time="2024-11-12T21:00:07.679870701Z" level=info msg="CreateContainer within sandbox \"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"725d53648a3b80863c669128c57f8361d89c63a001e84b27160a5e4f16bef507\"" Nov 12 21:00:07.680369 containerd[1475]: time="2024-11-12T21:00:07.680334882Z" level=info msg="StartContainer for \"725d53648a3b80863c669128c57f8361d89c63a001e84b27160a5e4f16bef507\"" Nov 12 21:00:07.716130 systemd[1]: Started cri-containerd-725d53648a3b80863c669128c57f8361d89c63a001e84b27160a5e4f16bef507.scope - libcontainer container 725d53648a3b80863c669128c57f8361d89c63a001e84b27160a5e4f16bef507. 
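The recurring kubelet dns.go:153 "Nameserver limits exceeded" warnings reflect the glibc resolver's ceiling of three nameservers: kubelet keeps the first three entries for the pod's resolv.conf and logs the applied line, here "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation follows; the dropped fourth upstream is hypothetical, since the log only shows the three survivors, and this is not kubelet's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// applyNameserverLimit keeps at most `limit` nameservers and reports
// whether anything was dropped, mirroring kubelet's behavior.
func applyNameserverLimit(servers []string, limit int) ([]string, bool) {
	if len(servers) <= limit {
		return servers, false
	}
	return servers[:limit], true
}

func main() {
	// Four upstreams configured (the fourth is hypothetical); only
	// three survive, matching the applied line in the log.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(configured, 3)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(applied, " "))
	}
}
```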
Nov 12 21:00:07.801820 containerd[1475]: time="2024-11-12T21:00:07.801773222Z" level=info msg="StartContainer for \"725d53648a3b80863c669128c57f8361d89c63a001e84b27160a5e4f16bef507\" returns successfully" Nov 12 21:00:10.376122 containerd[1475]: time="2024-11-12T21:00:10.376031446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:10.377382 containerd[1475]: time="2024-11-12T21:00:10.377315699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 21:00:10.378549 containerd[1475]: time="2024-11-12T21:00:10.378508186Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:10.381317 containerd[1475]: time="2024-11-12T21:00:10.381255781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:00:10.383983 containerd[1475]: time="2024-11-12T21:00:10.383244908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.720601042s" Nov 12 21:00:10.383983 containerd[1475]: time="2024-11-12T21:00:10.383292310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 21:00:10.385803 containerd[1475]: time="2024-11-12T21:00:10.385776013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 21:00:10.393829 containerd[1475]: time="2024-11-12T21:00:10.393719096Z" level=info msg="CreateContainer within sandbox \"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 21:00:10.427274 containerd[1475]: time="2024-11-12T21:00:10.427223607Z" level=info msg="CreateContainer within sandbox \"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6\"" Nov 12 21:00:10.427851 containerd[1475]: time="2024-11-12T21:00:10.427826634Z" level=info msg="StartContainer for \"01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6\"" Nov 12 21:00:10.457101 systemd[1]: Started cri-containerd-01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6.scope - libcontainer container 01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6. 
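The "Pulled image ... in 4.010016933s" and "... in 2.720601042s" records above are driven through CRI, but the measured-pull pattern (resolve tag, fetch by digest, unpack, report elapsed time) can be reproduced directly with the containerd Go client. A sketch under the assumption of a local containerd socket and the k8s.io namespace used by the CRI plugin:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors the log's repo tag / repo digest / duration fields.
	fmt.Printf("Pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}
```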
Nov 12 21:00:10.500572 containerd[1475]: time="2024-11-12T21:00:10.500512307Z" level=info msg="StartContainer for \"01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6\" returns successfully" Nov 12 21:00:10.962883 kubelet[2615]: I1112 21:00:10.962844 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b4bd556db-x6tzm" podStartSLOduration=29.018718204 podStartE2EDuration="34.962802478s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 21:00:04.440192358 +0000 UTC m=+51.251315610" lastFinishedPulling="2024-11-12 21:00:10.384276623 +0000 UTC m=+57.195399884" observedRunningTime="2024-11-12 21:00:10.962580049 +0000 UTC m=+57.773703300" watchObservedRunningTime="2024-11-12 21:00:10.962802478 +0000 UTC m=+57.773925729" Nov 12 21:00:10.977942 systemd[1]: Started sshd@16-10.0.0.160:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974). Nov 12 21:00:11.016003 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 21:00:11.016512 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:00:11.022055 systemd-logind[1457]: New session 17 of user core. Nov 12 21:00:11.032115 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 21:00:11.295515 sshd[4998]: pam_unix(sshd:session): session closed for user core Nov 12 21:00:11.300337 systemd[1]: sshd@16-10.0.0.160:22-10.0.0.1:53974.service: Deactivated successfully. Nov 12 21:00:11.303060 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 21:00:11.303924 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Nov 12 21:00:11.305855 systemd-logind[1457]: Removed session 17. Nov 12 21:00:13.279263 containerd[1475]: time="2024-11-12T21:00:13.279220893Z" level=info msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:13.530 [WARNING][5037] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0", GenerateName:"calico-kube-controllers-7b4bd556db-", Namespace:"calico-system", SelfLink:"", UID:"e456c18a-3106-4a95-8cb0-7da002cc0d2d", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bd556db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd", Pod:"calico-kube-controllers-7b4bd556db-x6tzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4844a9522e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:13.530 [INFO][5037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:13.530 [INFO][5037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" iface="eth0" netns="" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:13.530 [INFO][5037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:13.530 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.030 [INFO][5048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.030 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.030 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.035 [WARNING][5048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.035 [INFO][5048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.037 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:14.042985 containerd[1475]: 2024-11-12 21:00:14.040 [INFO][5037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.043405 containerd[1475]: time="2024-11-12T21:00:14.043039249Z" level=info msg="TearDown network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" successfully" Nov 12 21:00:14.043405 containerd[1475]: time="2024-11-12T21:00:14.043064237Z" level=info msg="StopPodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" returns successfully" Nov 12 21:00:14.052500 containerd[1475]: time="2024-11-12T21:00:14.052436066Z" level=info msg="RemovePodSandbox for \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" Nov 12 21:00:14.055088 containerd[1475]: time="2024-11-12T21:00:14.055056068Z" level=info msg="Forcibly stopping sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\"" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.097 [WARNING][5072] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0", GenerateName:"calico-kube-controllers-7b4bd556db-", Namespace:"calico-system", SelfLink:"", UID:"e456c18a-3106-4a95-8cb0-7da002cc0d2d", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bd556db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b607bdc7eb98b2f07453a2014a3051a7ff931057858c6642cf46febd713e27cd", Pod:"calico-kube-controllers-7b4bd556db-x6tzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4844a9522e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.098 [INFO][5072] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.098 [INFO][5072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" iface="eth0" netns="" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.098 [INFO][5072] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.098 [INFO][5072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.136 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.136 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.136 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.145 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.145 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" HandleID="k8s-pod-network.bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Workload="localhost-k8s-calico--kube--controllers--7b4bd556db--x6tzm-eth0" Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.146 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:14.157031 containerd[1475]: 2024-11-12 21:00:14.150 [INFO][5072] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c" Nov 12 21:00:14.157031 containerd[1475]: time="2024-11-12T21:00:14.155391809Z" level=info msg="TearDown network for sandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" successfully" Nov 12 21:00:14.209492 containerd[1475]: time="2024-11-12T21:00:14.209416621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 21:00:14.209492 containerd[1475]: time="2024-11-12T21:00:14.209499380Z" level=info msg="RemovePodSandbox \"bc4e2d4a9d7c998a3e538cf46fe8dba3b84d97fbee5efe8f4956a6559f601b6c\" returns successfully" Nov 12 21:00:14.210290 containerd[1475]: time="2024-11-12T21:00:14.210216403Z" level=info msg="StopPodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.285 [WARNING][5105] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gj9vk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a46afc74-9c76-46ea-bc9f-639441ca0037", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b", Pod:"coredns-76f75df574-gj9vk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c6116b2abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.286 [INFO][5105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.286 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" iface="eth0" netns="" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.286 [INFO][5105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.287 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.313 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.313 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.313 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.321 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.321 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.322 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:14.328407 containerd[1475]: 2024-11-12 21:00:14.325 [INFO][5105] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.328407 containerd[1475]: time="2024-11-12T21:00:14.328321755Z" level=info msg="TearDown network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" successfully" Nov 12 21:00:14.328407 containerd[1475]: time="2024-11-12T21:00:14.328356582Z" level=info msg="StopPodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" returns successfully" Nov 12 21:00:14.330617 containerd[1475]: time="2024-11-12T21:00:14.330569748Z" level=info msg="RemovePodSandbox for \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" Nov 12 21:00:14.330663 containerd[1475]: time="2024-11-12T21:00:14.330622070Z" level=info msg="Forcibly stopping sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\"" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.373 [WARNING][5136] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gj9vk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a46afc74-9c76-46ea-bc9f-639441ca0037", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8097acfcc4f021ba427ed4015c030ca9ff17a3d7c2848ebd8f5bd60e7ee067b", Pod:"coredns-76f75df574-gj9vk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c6116b2abe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.374 [INFO][5136] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.374 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" iface="eth0" netns="" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.374 [INFO][5136] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.374 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.400 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.401 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.401 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.408 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.408 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" HandleID="k8s-pod-network.b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Workload="localhost-k8s-coredns--76f75df574--gj9vk-eth0" Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.409 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:00:14.417902 containerd[1475]: 2024-11-12 21:00:14.415 [INFO][5136] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d" Nov 12 21:00:14.418601 containerd[1475]: time="2024-11-12T21:00:14.418552312Z" level=info msg="TearDown network for sandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" successfully" Nov 12 21:00:14.467472 containerd[1475]: time="2024-11-12T21:00:14.467237008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 21:00:14.467472 containerd[1475]: time="2024-11-12T21:00:14.467344356Z" level=info msg="RemovePodSandbox \"b85815973d360a69417829ec2e3930a873f02b0c2b4ed87f8058be31a2fbdd6d\" returns successfully" Nov 12 21:00:14.468526 containerd[1475]: time="2024-11-12T21:00:14.468477220Z" level=info msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\"" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.570 [WARNING][5166] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--9qn2s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e2c54d21-3c5b-4cad-88df-27707d237df7", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c", Pod:"coredns-76f75df574-9qn2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9fb8d80d16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.571 [INFO][5166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.571 [INFO][5166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" iface="eth0" netns="" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.571 [INFO][5166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.571 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.607 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0" Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.607 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.607 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.614 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0"
Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.614 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0"
Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.616 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:14.621570 containerd[1475]: 2024-11-12 21:00:14.618 [INFO][5166] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"
Nov 12 21:00:14.621570 containerd[1475]: time="2024-11-12T21:00:14.621518639Z" level=info msg="TearDown network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" successfully"
Nov 12 21:00:14.621570 containerd[1475]: time="2024-11-12T21:00:14.621548818Z" level=info msg="StopPodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" returns successfully"
Nov 12 21:00:14.625029 containerd[1475]: time="2024-11-12T21:00:14.623176666Z" level=info msg="RemovePodSandbox for \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\""
Nov 12 21:00:14.625029 containerd[1475]: time="2024-11-12T21:00:14.623204179Z" level=info msg="Forcibly stopping sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\""
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.711 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--9qn2s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e2c54d21-3c5b-4cad-88df-27707d237df7", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"375023bdb8fb67eb204e70f923363977d669288261adabaab3b994be5b748e9c", Pod:"coredns-76f75df574-9qn2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9fb8d80d16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.711 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.711 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" iface="eth0" netns=""
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.711 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.711 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.741 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.742 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.742 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.748 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.748 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" HandleID="k8s-pod-network.3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381" Workload="localhost-k8s-coredns--76f75df574--9qn2s-eth0"
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.750 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:14.760237 containerd[1475]: 2024-11-12 21:00:14.757 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381"
Nov 12 21:00:14.760690 containerd[1475]: time="2024-11-12T21:00:14.760362145Z" level=info msg="TearDown network for sandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" successfully"
Nov 12 21:00:15.195418 containerd[1475]: time="2024-11-12T21:00:15.195343052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 21:00:15.195418 containerd[1475]: time="2024-11-12T21:00:15.195421503Z" level=info msg="RemovePodSandbox \"3957496e0c56aca8e613088ad167f07d4b93864f63d6c1f66ed39cfb58d4f381\" returns successfully"
Nov 12 21:00:15.196163 containerd[1475]: time="2024-11-12T21:00:15.196070103Z" level=info msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\""
Nov 12 21:00:15.213251 containerd[1475]: time="2024-11-12T21:00:15.213135712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930"
Nov 12 21:00:15.218079 containerd[1475]: time="2024-11-12T21:00:15.218018259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:15.262494 containerd[1475]: time="2024-11-12T21:00:15.262439402Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:15.266030 containerd[1475]: time="2024-11-12T21:00:15.265984842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:15.268700 containerd[1475]: time="2024-11-12T21:00:15.267474603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 4.881658102s"
Nov 12 21:00:15.268700 containerd[1475]: time="2024-11-12T21:00:15.267745314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\""
Nov 12 21:00:15.269290 containerd[1475]: time="2024-11-12T21:00:15.269267016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 21:00:15.270461 containerd[1475]: time="2024-11-12T21:00:15.270392265Z" level=info msg="CreateContainer within sandbox \"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.236 [WARNING][5230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b947df0-74b2-4106-85b7-80347fe0a3b9", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89", Pod:"calico-apiserver-7dfb9bbd9d-f7tm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a9bb8cdb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.237 [INFO][5230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.237 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" iface="eth0" netns=""
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.237 [INFO][5230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.237 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.266 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.266 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.266 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.271 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.271 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.272 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.279100 containerd[1475]: 2024-11-12 21:00:15.275 [INFO][5230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.279100 containerd[1475]: time="2024-11-12T21:00:15.278953268Z" level=info msg="TearDown network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" successfully"
Nov 12 21:00:15.279100 containerd[1475]: time="2024-11-12T21:00:15.278995229Z" level=info msg="StopPodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" returns successfully"
Nov 12 21:00:15.281228 containerd[1475]: time="2024-11-12T21:00:15.281158417Z" level=info msg="RemovePodSandbox for \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\""
Nov 12 21:00:15.281228 containerd[1475]: time="2024-11-12T21:00:15.281208894Z" level=info msg="Forcibly stopping sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\""
Nov 12 21:00:15.372055 containerd[1475]: time="2024-11-12T21:00:15.371105713Z" level=info msg="CreateContainer within sandbox \"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3f82b62894866294f45721ddd4c7bcd2f80c36b2320532e2bd2034ea7441452\""
Nov 12 21:00:15.372055 containerd[1475]: time="2024-11-12T21:00:15.371998953Z" level=info msg="StartContainer for \"b3f82b62894866294f45721ddd4c7bcd2f80c36b2320532e2bd2034ea7441452\""
Nov 12 21:00:15.411402 systemd[1]: Started cri-containerd-b3f82b62894866294f45721ddd4c7bcd2f80c36b2320532e2bd2034ea7441452.scope - libcontainer container b3f82b62894866294f45721ddd4c7bcd2f80c36b2320532e2bd2034ea7441452.
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.366 [WARNING][5264] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b947df0-74b2-4106-85b7-80347fe0a3b9", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6105d42c5e675c15487557d593c69f1c1919eb6f71e3a13c3dd81129a2f5ba89", Pod:"calico-apiserver-7dfb9bbd9d-f7tm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a9bb8cdb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.367 [INFO][5264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.367 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" iface="eth0" netns=""
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.367 [INFO][5264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.367 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.395 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.395 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.395 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.409 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.409 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" HandleID="k8s-pod-network.036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--f7tm4-eth0"
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.411 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.420077 containerd[1475]: 2024-11-12 21:00:15.415 [INFO][5264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d"
Nov 12 21:00:15.421469 containerd[1475]: time="2024-11-12T21:00:15.421011112Z" level=info msg="TearDown network for sandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" successfully"
Nov 12 21:00:15.425608 containerd[1475]: time="2024-11-12T21:00:15.425550157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 21:00:15.425784 containerd[1475]: time="2024-11-12T21:00:15.425629440Z" level=info msg="RemovePodSandbox \"036cf7aea3cfaf3c3ddf6c6cc9f9e94408c0eee786994c2a49649c63a0d8f32d\" returns successfully"
Nov 12 21:00:15.426572 containerd[1475]: time="2024-11-12T21:00:15.426544683Z" level=info msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\""
Nov 12 21:00:15.477439 containerd[1475]: time="2024-11-12T21:00:15.477170591Z" level=info msg="StartContainer for \"b3f82b62894866294f45721ddd4c7bcd2f80c36b2320532e2bd2034ea7441452\" returns successfully"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.482 [WARNING][5320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"de27d36a-201e-4191-8379-e6120ae4db51", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4", Pod:"calico-apiserver-7dfb9bbd9d-zxsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76dd4988ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.482 [INFO][5320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.482 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" iface="eth0" netns=""
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.482 [INFO][5320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.482 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.514 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.514 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.514 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.520 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.520 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.522 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.528008 containerd[1475]: 2024-11-12 21:00:15.525 [INFO][5320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.528608 containerd[1475]: time="2024-11-12T21:00:15.528010060Z" level=info msg="TearDown network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" successfully"
Nov 12 21:00:15.528608 containerd[1475]: time="2024-11-12T21:00:15.528038064Z" level=info msg="StopPodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" returns successfully"
Nov 12 21:00:15.528608 containerd[1475]: time="2024-11-12T21:00:15.528522257Z" level=info msg="RemovePodSandbox for \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\""
Nov 12 21:00:15.528608 containerd[1475]: time="2024-11-12T21:00:15.528554970Z" level=info msg="Forcibly stopping sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\""
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.567 [WARNING][5368] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0", GenerateName:"calico-apiserver-7dfb9bbd9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"de27d36a-201e-4191-8379-e6120ae4db51", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfb9bbd9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4", Pod:"calico-apiserver-7dfb9bbd9d-zxsdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76dd4988ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.568 [INFO][5368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.568 [INFO][5368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" iface="eth0" netns=""
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.568 [INFO][5368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.568 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.593 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.593 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.593 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.599 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.599 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" HandleID="k8s-pod-network.c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331" Workload="localhost-k8s-calico--apiserver--7dfb9bbd9d--zxsdt-eth0"
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.600 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.605565 containerd[1475]: 2024-11-12 21:00:15.602 [INFO][5368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331"
Nov 12 21:00:15.608121 containerd[1475]: time="2024-11-12T21:00:15.606085402Z" level=info msg="TearDown network for sandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" successfully"
Nov 12 21:00:15.684380 containerd[1475]: time="2024-11-12T21:00:15.684317568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 21:00:15.684380 containerd[1475]: time="2024-11-12T21:00:15.684399335Z" level=info msg="RemovePodSandbox \"c6451c035061189b968da1e2ab8abfe458bd92bcbe35a67c5413f38f5f668331\" returns successfully"
Nov 12 21:00:15.684990 containerd[1475]: time="2024-11-12T21:00:15.684923004Z" level=info msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\""
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.723 [WARNING][5397] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2vmjh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7323a1ea-5ba5-4a75-b521-01e3f15f8119", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c", Pod:"csi-node-driver-2vmjh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49e740d3df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.723 [INFO][5397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.723 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" iface="eth0" netns=""
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.723 [INFO][5397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.723 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.745 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.745 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.745 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.751 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.751 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.752 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.758715 containerd[1475]: 2024-11-12 21:00:15.755 [INFO][5397] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.759625 containerd[1475]: time="2024-11-12T21:00:15.758832440Z" level=info msg="TearDown network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" successfully"
Nov 12 21:00:15.759625 containerd[1475]: time="2024-11-12T21:00:15.758856086Z" level=info msg="StopPodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" returns successfully"
Nov 12 21:00:15.759736 containerd[1475]: time="2024-11-12T21:00:15.759719189Z" level=info msg="RemovePodSandbox for \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\""
Nov 12 21:00:15.759736 containerd[1475]: time="2024-11-12T21:00:15.759744819Z" level=info msg="Forcibly stopping sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\""
Nov 12 21:00:15.779909 containerd[1475]: time="2024-11-12T21:00:15.779859041Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:15.780926 containerd[1475]: time="2024-11-12T21:00:15.780693048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77"
Nov 12 21:00:15.785173 containerd[1475]: time="2024-11-12T21:00:15.785127952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 515.756825ms"
Nov 12 21:00:15.785173 containerd[1475]: time="2024-11-12T21:00:15.785159964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\""
Nov 12 21:00:15.787512 containerd[1475]: time="2024-11-12T21:00:15.787485585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\""
Nov 12 21:00:15.788604 containerd[1475]: time="2024-11-12T21:00:15.788582678Z" level=info msg="CreateContainer within sandbox \"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 21:00:15.823830 containerd[1475]: time="2024-11-12T21:00:15.823764160Z" level=info msg="CreateContainer within sandbox \"924234e7427389e4e542f3a25a38c70a83d98d65f4cd91c29ddba10a889396b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2953cefa535c2b7b8fc3c7d50c9315a5d97ee4c1501b2c34d6bb6e7b2abb15d5\""
Nov 12 21:00:15.826174 containerd[1475]: time="2024-11-12T21:00:15.825216789Z" level=info msg="StartContainer for \"2953cefa535c2b7b8fc3c7d50c9315a5d97ee4c1501b2c34d6bb6e7b2abb15d5\""
Nov 12 21:00:15.865249 systemd[1]: Started cri-containerd-2953cefa535c2b7b8fc3c7d50c9315a5d97ee4c1501b2c34d6bb6e7b2abb15d5.scope - libcontainer container 2953cefa535c2b7b8fc3c7d50c9315a5d97ee4c1501b2c34d6bb6e7b2abb15d5.
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.814 [WARNING][5428] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2vmjh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7323a1ea-5ba5-4a75-b521-01e3f15f8119", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c", Pod:"csi-node-driver-2vmjh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49e740d3df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.814 [INFO][5428] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.814 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" iface="eth0" netns=""
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.814 [INFO][5428] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.814 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.846 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.846 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.846 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.881 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.881 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" HandleID="k8s-pod-network.5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14" Workload="localhost-k8s-csi--node--driver--2vmjh-eth0"
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.883 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 21:00:15.890190 containerd[1475]: 2024-11-12 21:00:15.886 [INFO][5428] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14"
Nov 12 21:00:15.892060 containerd[1475]: time="2024-11-12T21:00:15.890768103Z" level=info msg="TearDown network for sandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" successfully"
Nov 12 21:00:15.896380 containerd[1475]: time="2024-11-12T21:00:15.896352331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 21:00:15.896563 containerd[1475]: time="2024-11-12T21:00:15.896544532Z" level=info msg="RemovePodSandbox \"5e6ddb06c34e3ea309e68cc5f3eca36cff44ec7f6ed72b2e319e79fabd6b1e14\" returns successfully"
Nov 12 21:00:15.960855 containerd[1475]: time="2024-11-12T21:00:15.960814096Z" level=info msg="StartContainer for \"2953cefa535c2b7b8fc3c7d50c9315a5d97ee4c1501b2c34d6bb6e7b2abb15d5\" returns successfully"
Nov 12 21:00:15.994439 kubelet[2615]: I1112 21:00:15.994407 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-f7tm4" podStartSLOduration=29.199891511 podStartE2EDuration="39.994365918s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 21:00:04.473828442 +0000 UTC m=+51.284951693" lastFinishedPulling="2024-11-12 21:00:15.268302839 +0000 UTC m=+62.079426100" observedRunningTime="2024-11-12 21:00:15.992922307 +0000 UTC m=+62.804045558" watchObservedRunningTime="2024-11-12 21:00:15.994365918 +0000 UTC m=+62.805489169"
Nov 12 21:00:15.995336 kubelet[2615]: I1112 21:00:15.995321 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dfb9bbd9d-zxsdt" podStartSLOduration=28.823237538 podStartE2EDuration="39.995298404s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 21:00:04.61375932 +0000 UTC m=+51.424882571" lastFinishedPulling="2024-11-12 21:00:15.785820186 +0000 UTC m=+62.596943437" observedRunningTime="2024-11-12 21:00:15.981094248 +0000 UTC m=+62.792217499" watchObservedRunningTime="2024-11-12 21:00:15.995298404 +0000 UTC m=+62.806421655"
Nov 12 21:00:16.309639 systemd[1]: Started sshd@17-10.0.0.160:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504).
Nov 12 21:00:16.361269 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:16.363041 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:16.372779 systemd-logind[1457]: New session 18 of user core.
Nov 12 21:00:16.380185 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 21:00:16.506059 sshd[5488]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:16.515556 systemd[1]: sshd@17-10.0.0.160:22-10.0.0.1:53504.service: Deactivated successfully.
Nov 12 21:00:16.518344 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 21:00:16.520452 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Nov 12 21:00:16.525883 systemd[1]: Started sshd@18-10.0.0.160:22-10.0.0.1:53508.service - OpenSSH per-connection server daemon (10.0.0.1:53508).
Nov 12 21:00:16.527406 systemd-logind[1457]: Removed session 18.
Nov 12 21:00:16.562150 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 53508 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:16.567133 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:16.574116 systemd-logind[1457]: New session 19 of user core.
Nov 12 21:00:16.586218 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 21:00:16.849056 sshd[5505]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:16.857132 systemd[1]: sshd@18-10.0.0.160:22-10.0.0.1:53508.service: Deactivated successfully.
Nov 12 21:00:16.859021 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 21:00:16.860637 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Nov 12 21:00:16.869417 systemd[1]: Started sshd@19-10.0.0.160:22-10.0.0.1:53524.service - OpenSSH per-connection server daemon (10.0.0.1:53524).
Nov 12 21:00:16.870593 systemd-logind[1457]: Removed session 19.
Nov 12 21:00:16.902013 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 53524 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:16.903769 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:16.908857 systemd-logind[1457]: New session 20 of user core.
Nov 12 21:00:16.919126 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 21:00:16.974431 kubelet[2615]: I1112 21:00:16.974394 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 21:00:16.974431 kubelet[2615]: I1112 21:00:16.974416 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 21:00:17.325301 containerd[1475]: time="2024-11-12T21:00:17.325253557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:17.327440 containerd[1475]: time="2024-11-12T21:00:17.327397412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080"
Nov 12 21:00:17.330260 containerd[1475]: time="2024-11-12T21:00:17.330220474Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:17.333302 containerd[1475]: time="2024-11-12T21:00:17.333260743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:00:17.333847 containerd[1475]: time="2024-11-12T21:00:17.333798658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.54611853s"
Nov 12 21:00:17.333847 containerd[1475]: time="2024-11-12T21:00:17.333837773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\""
Nov 12 21:00:17.336037 containerd[1475]: time="2024-11-12T21:00:17.335996366Z" level=info msg="CreateContainer within sandbox \"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 21:00:17.353961 containerd[1475]: time="2024-11-12T21:00:17.353906013Z" level=info msg="CreateContainer within sandbox \"3bc225e8af5d8abd9ec5f3aaeb1232e8959d1cd77f23df406610663e6775067c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8a31a39696593f20a69d06b8191ac02a7b4052f4544746233fc25aa9de982385\""
Nov 12 21:00:17.354449 containerd[1475]: time="2024-11-12T21:00:17.354410935Z" level=info msg="StartContainer for \"8a31a39696593f20a69d06b8191ac02a7b4052f4544746233fc25aa9de982385\""
Nov 12 21:00:17.408577 systemd[1]: Started cri-containerd-8a31a39696593f20a69d06b8191ac02a7b4052f4544746233fc25aa9de982385.scope - libcontainer container 8a31a39696593f20a69d06b8191ac02a7b4052f4544746233fc25aa9de982385.
Nov 12 21:00:17.562455 containerd[1475]: time="2024-11-12T21:00:17.562316988Z" level=info msg="StartContainer for \"8a31a39696593f20a69d06b8191ac02a7b4052f4544746233fc25aa9de982385\" returns successfully"
Nov 12 21:00:17.923513 kubelet[2615]: I1112 21:00:17.923467 2615 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 21:00:17.924385 kubelet[2615]: I1112 21:00:17.924364 2615 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 21:00:17.988575 kubelet[2615]: I1112 21:00:17.988514 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2vmjh" podStartSLOduration=28.305986868 podStartE2EDuration="41.988475501s" podCreationTimestamp="2024-11-12 20:59:36 +0000 UTC" firstStartedPulling="2024-11-12 21:00:03.651583441 +0000 UTC m=+50.462706692" lastFinishedPulling="2024-11-12 21:00:17.334072074 +0000 UTC m=+64.145195325" observedRunningTime="2024-11-12 21:00:17.98805433 +0000 UTC m=+64.799177601" watchObservedRunningTime="2024-11-12 21:00:17.988475501 +0000 UTC m=+64.799598752"
Nov 12 21:00:18.331326 systemd[1]: run-containerd-runc-k8s.io-01a8de55fee15be4209a4c606b93cbdc795d7281df975a9472fa9dafbe8d0ec6-runc.fgkyye.mount: Deactivated successfully.
Nov 12 21:00:18.401124 kubelet[2615]: I1112 21:00:18.401083 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 21:00:18.511224 sshd[5518]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:18.524755 systemd[1]: Started sshd@20-10.0.0.160:22-10.0.0.1:53526.service - OpenSSH per-connection server daemon (10.0.0.1:53526).
Nov 12 21:00:18.525733 systemd[1]: sshd@19-10.0.0.160:22-10.0.0.1:53524.service: Deactivated successfully.
Nov 12 21:00:18.528896 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 21:00:18.532575 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Nov 12 21:00:18.533941 systemd-logind[1457]: Removed session 20.
Nov 12 21:00:18.568458 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:18.570508 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:18.574801 systemd-logind[1457]: New session 21 of user core.
Nov 12 21:00:18.586110 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 21:00:18.793050 sshd[5620]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:18.802945 systemd[1]: sshd@20-10.0.0.160:22-10.0.0.1:53526.service: Deactivated successfully.
Nov 12 21:00:18.804771 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 21:00:18.806407 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Nov 12 21:00:18.807752 systemd[1]: Started sshd@21-10.0.0.160:22-10.0.0.1:53532.service - OpenSSH per-connection server daemon (10.0.0.1:53532).
Nov 12 21:00:18.808532 systemd-logind[1457]: Removed session 21.
Nov 12 21:00:18.844233 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 53532 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:18.845689 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:18.849810 systemd-logind[1457]: New session 22 of user core.
Nov 12 21:00:18.859106 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 21:00:18.973157 sshd[5635]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:18.977299 systemd[1]: sshd@21-10.0.0.160:22-10.0.0.1:53532.service: Deactivated successfully.
Nov 12 21:00:18.979344 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 21:00:18.979904 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Nov 12 21:00:18.980925 systemd-logind[1457]: Removed session 22.
Nov 12 21:00:23.988902 systemd[1]: Started sshd@22-10.0.0.160:22-10.0.0.1:53534.service - OpenSSH per-connection server daemon (10.0.0.1:53534).
Nov 12 21:00:24.027697 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 53534 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:24.029265 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:24.033648 systemd-logind[1457]: New session 23 of user core.
Nov 12 21:00:24.044128 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 21:00:24.156953 sshd[5656]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:24.161908 systemd[1]: sshd@22-10.0.0.160:22-10.0.0.1:53534.service: Deactivated successfully.
Nov 12 21:00:24.164090 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 21:00:24.164866 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
Nov 12 21:00:24.165799 systemd-logind[1457]: Removed session 23.
Nov 12 21:00:27.492208 kubelet[2615]: I1112 21:00:27.492168 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 21:00:29.173376 systemd[1]: Started sshd@23-10.0.0.160:22-10.0.0.1:42800.service - OpenSSH per-connection server daemon (10.0.0.1:42800).
Nov 12 21:00:29.213552 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 42800 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:29.215282 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:29.221388 systemd-logind[1457]: New session 24 of user core.
Nov 12 21:00:29.230138 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 21:00:29.347934 sshd[5680]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:29.352125 systemd[1]: sshd@23-10.0.0.160:22-10.0.0.1:42800.service: Deactivated successfully.
Nov 12 21:00:29.354212 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 21:00:29.354983 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit.
Nov 12 21:00:29.355898 systemd-logind[1457]: Removed session 24.
Nov 12 21:00:34.358775 systemd[1]: Started sshd@24-10.0.0.160:22-10.0.0.1:42802.service - OpenSSH per-connection server daemon (10.0.0.1:42802).
Nov 12 21:00:34.394351 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 42802 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:34.395790 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:34.399870 systemd-logind[1457]: New session 25 of user core.
Nov 12 21:00:34.409192 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 21:00:34.513791 sshd[5694]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:34.517748 systemd[1]: sshd@24-10.0.0.160:22-10.0.0.1:42802.service: Deactivated successfully.
Nov 12 21:00:34.519692 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 21:00:34.520242 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
Nov 12 21:00:34.521013 systemd-logind[1457]: Removed session 25.
Nov 12 21:00:37.281953 kubelet[2615]: E1112 21:00:37.281920 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 21:00:39.525742 systemd[1]: Started sshd@25-10.0.0.160:22-10.0.0.1:44118.service - OpenSSH per-connection server daemon (10.0.0.1:44118).
Nov 12 21:00:39.562406 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 44118 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 21:00:39.563753 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:39.567631 systemd-logind[1457]: New session 26 of user core.
Nov 12 21:00:39.576095 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 21:00:39.684873 sshd[5727]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:39.688778 systemd[1]: sshd@25-10.0.0.160:22-10.0.0.1:44118.service: Deactivated successfully.
Nov 12 21:00:39.690675 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 21:00:39.691288 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit.
Nov 12 21:00:39.692113 systemd-logind[1457]: Removed session 26.