Nov 12 20:44:53.921959 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:44:53.921989 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:44:53.922011 kernel: BIOS-provided physical RAM map:
Nov 12 20:44:53.922025 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:44:53.922031 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 12 20:44:53.922037 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 12 20:44:53.922045 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 12 20:44:53.922051 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 12 20:44:53.922060 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 12 20:44:53.922071 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 12 20:44:53.922084 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 12 20:44:53.922091 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 12 20:44:53.922097 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 12 20:44:53.922103 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 12 20:44:53.922112 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 12 20:44:53.922125 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 12 20:44:53.922138 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 12 20:44:53.922147 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 12 20:44:53.922156 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 12 20:44:53.922164 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:44:53.922171 kernel: NX (Execute Disable) protection: active
Nov 12 20:44:53.922178 kernel: APIC: Static calls initialized
Nov 12 20:44:53.922184 kernel: efi: EFI v2.7 by EDK II
Nov 12 20:44:53.922191 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Nov 12 20:44:53.922198 kernel: SMBIOS 2.8 present.
Nov 12 20:44:53.922204 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 12 20:44:53.922211 kernel: Hypervisor detected: KVM
Nov 12 20:44:53.922222 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:44:53.922228 kernel: kvm-clock: using sched offset of 5869967030 cycles
Nov 12 20:44:53.922236 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:44:53.922243 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:44:53.922250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:44:53.922257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:44:53.922264 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 12 20:44:53.922271 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:44:53.922278 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:44:53.922288 kernel: Using GB pages for direct mapping
Nov 12 20:44:53.922296 kernel: Secure boot disabled
Nov 12 20:44:53.922302 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:44:53.922310 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 12 20:44:53.922324 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 20:44:53.922331 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922339 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922349 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 12 20:44:53.922356 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922363 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922370 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922378 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:44:53.922495 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 12 20:44:53.922502 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 12 20:44:53.922513 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Nov 12 20:44:53.922521 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 12 20:44:53.922528 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 12 20:44:53.922535 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 12 20:44:53.922550 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 12 20:44:53.922558 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 12 20:44:53.922566 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 12 20:44:53.922575 kernel: No NUMA configuration found
Nov 12 20:44:53.922583 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 12 20:44:53.922593 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 12 20:44:53.922601 kernel: Zone ranges:
Nov 12 20:44:53.922608 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:44:53.922615 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 12 20:44:53.922623 kernel: Normal empty
Nov 12 20:44:53.922630 kernel: Movable zone start for each node
Nov 12 20:44:53.922638 kernel: Early memory node ranges
Nov 12 20:44:53.922647 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:44:53.922656 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 12 20:44:53.922665 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 12 20:44:53.922677 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 12 20:44:53.922686 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 12 20:44:53.922695 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 12 20:44:53.922704 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 12 20:44:53.922716 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:44:53.922725 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:44:53.922734 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 12 20:44:53.922743 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:44:53.922752 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 12 20:44:53.922765 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 12 20:44:53.922774 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 12 20:44:53.922783 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:44:53.922792 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:44:53.922801 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:44:53.922810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:44:53.922819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:44:53.922829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:44:53.922838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:44:53.922850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:44:53.922859 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:44:53.922868 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:44:53.922875 kernel: TSC deadline timer available
Nov 12 20:44:53.922882 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:44:53.922890 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:44:53.922897 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:44:53.922904 kernel: kvm-guest: setup PV sched yield
Nov 12 20:44:53.922912 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 12 20:44:53.922922 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:44:53.922929 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:44:53.922937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:44:53.922944 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:44:53.922951 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:44:53.922958 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:44:53.922965 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:44:53.922973 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:44:53.922984 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:44:53.922998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:44:53.923013 kernel: random: crng init done
Nov 12 20:44:53.923022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:44:53.923032 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:44:53.923041 kernel: Fallback order for Node 0: 0
Nov 12 20:44:53.923050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 12 20:44:53.923060 kernel: Policy zone: DMA32
Nov 12 20:44:53.923070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:44:53.923084 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved)
Nov 12 20:44:53.923092 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:44:53.923099 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:44:53.923106 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:44:53.923114 kernel: Dynamic Preempt: voluntary
Nov 12 20:44:53.923130 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:44:53.923141 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:44:53.923149 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:44:53.923156 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:44:53.923164 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:44:53.923172 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:44:53.923179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:44:53.923191 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:44:53.923205 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:44:53.923219 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:44:53.923230 kernel: Console: colour dummy device 80x25
Nov 12 20:44:53.923244 kernel: printk: console [ttyS0] enabled
Nov 12 20:44:53.923257 kernel: ACPI: Core revision 20230628
Nov 12 20:44:53.923265 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:44:53.923273 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:44:53.923280 kernel: x2apic enabled
Nov 12 20:44:53.923288 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:44:53.923296 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:44:53.923304 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:44:53.923311 kernel: kvm-guest: setup PV IPIs
Nov 12 20:44:53.923319 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:44:53.923329 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:44:53.923337 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:44:53.923345 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:44:53.923352 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:44:53.923360 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:44:53.923368 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:44:53.923375 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:44:53.923398 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:44:53.923406 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:44:53.923417 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:44:53.923424 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:44:53.923435 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:44:53.923455 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:44:53.923463 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:44:53.923472 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:44:53.923480 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:44:53.923487 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:44:53.923498 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:44:53.923506 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:44:53.923514 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:44:53.923521 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:44:53.923529 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:44:53.923544 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:44:53.923552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:44:53.923560 kernel: landlock: Up and running.
Nov 12 20:44:53.923568 kernel: SELinux: Initializing.
Nov 12 20:44:53.923582 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:44:53.923592 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:44:53.923601 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:44:53.923611 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:44:53.923621 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:44:53.923630 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:44:53.923640 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:44:53.923649 kernel: ... version: 0
Nov 12 20:44:53.923659 kernel: ... bit width: 48
Nov 12 20:44:53.923672 kernel: ... generic registers: 6
Nov 12 20:44:53.923681 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:44:53.923690 kernel: ... max period: 00007fffffffffff
Nov 12 20:44:53.923700 kernel: ... fixed-purpose events: 0
Nov 12 20:44:53.923709 kernel: ... event mask: 000000000000003f
Nov 12 20:44:53.923718 kernel: signal: max sigframe size: 1776
Nov 12 20:44:53.923725 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:44:53.923733 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:44:53.923741 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:44:53.923751 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:44:53.923758 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:44:53.923766 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:44:53.923773 kernel: smpboot: Max logical packages: 1
Nov 12 20:44:53.923781 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:44:53.923789 kernel: devtmpfs: initialized
Nov 12 20:44:53.923796 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:44:53.923804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 12 20:44:53.923812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 12 20:44:53.923822 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 12 20:44:53.923830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 12 20:44:53.923838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 12 20:44:53.923846 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:44:53.923853 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:44:53.923861 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:44:53.923868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:44:53.923876 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:44:53.923884 kernel: audit: type=2000 audit(1731444292.855:1): state=initialized audit_enabled=0 res=1
Nov 12 20:44:53.923894 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:44:53.923902 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:44:53.923910 kernel: cpuidle: using governor menu
Nov 12 20:44:53.923917 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:44:53.923925 kernel: dca service started, version 1.12.1
Nov 12 20:44:53.923933 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:44:53.923940 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:44:53.923948 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:44:53.923956 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:44:53.923966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:44:53.923974 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:44:53.923981 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:44:53.923989 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:44:53.923997 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:44:53.924004 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:44:53.924012 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:44:53.924019 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:44:53.924027 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:44:53.924037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:44:53.924045 kernel: ACPI: Interpreter enabled
Nov 12 20:44:53.924052 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:44:53.924059 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:44:53.924067 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:44:53.924075 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:44:53.924082 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:44:53.924090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:44:53.924335 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:44:53.924578 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:44:53.924712 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:44:53.924722 kernel: PCI host bridge to bus 0000:00
Nov 12 20:44:53.924874 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:44:53.924996 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:44:53.925113 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:44:53.925234 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:44:53.925349 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:44:53.925483 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 12 20:44:53.925626 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:44:53.925805 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:44:53.925994 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:44:53.926167 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 12 20:44:53.926323 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 12 20:44:53.926665 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 12 20:44:53.926845 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 12 20:44:53.927014 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:44:53.927217 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:44:53.927454 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 12 20:44:53.927692 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 12 20:44:53.927906 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 12 20:44:53.928202 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:44:53.928356 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 12 20:44:53.928613 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 12 20:44:53.928784 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 12 20:44:53.928998 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:44:53.929178 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 12 20:44:53.929340 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 12 20:44:53.929561 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 12 20:44:53.929719 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 12 20:44:53.929898 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:44:53.930129 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:44:53.930342 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:44:53.930585 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 12 20:44:53.930715 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 12 20:44:53.930862 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:44:53.931026 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 12 20:44:53.931039 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:44:53.931048 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:44:53.931056 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:44:53.931092 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:44:53.931102 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:44:53.931119 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:44:53.931127 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:44:53.931135 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:44:53.931143 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:44:53.931151 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:44:53.931159 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:44:53.931167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:44:53.931183 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:44:53.931191 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:44:53.931199 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:44:53.931206 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:44:53.931214 kernel: iommu: Default domain type: Translated
Nov 12 20:44:53.931222 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:44:53.931230 kernel: efivars: Registered efivars operations
Nov 12 20:44:53.931238 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:44:53.931246 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:44:53.931257 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 12 20:44:53.931265 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 12 20:44:53.931273 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 12 20:44:53.931280 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 12 20:44:53.931451 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:44:53.931640 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:44:53.931844 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:44:53.931857 kernel: vgaarb: loaded
Nov 12 20:44:53.931865 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:44:53.931879 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:44:53.931893 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:44:53.931901 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:44:53.931909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:44:53.931917 kernel: pnp: PnP ACPI init
Nov 12 20:44:53.932117 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:44:53.932136 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:44:53.932144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:44:53.932164 kernel: NET: Registered PF_INET protocol family
Nov 12 20:44:53.932181 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:44:53.932190 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:44:53.932206 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:44:53.932217 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:44:53.932225 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:44:53.932233 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:44:53.932241 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:44:53.932249 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:44:53.932264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:44:53.932272 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:44:53.932477 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 12 20:44:53.932650 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 12 20:44:53.932800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:44:53.932947 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:44:53.933082 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:44:53.933226 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:44:53.933411 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:44:53.933533 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 12 20:44:53.933557 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:44:53.933567 kernel: Initialise system trusted keyrings
Nov 12 20:44:53.933575 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:44:53.933583 kernel: Key type asymmetric registered
Nov 12 20:44:53.933591 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:44:53.933599 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:44:53.933612 kernel: io scheduler mq-deadline registered
Nov 12 20:44:53.933622 kernel: io scheduler kyber registered
Nov 12 20:44:53.933630 kernel: io scheduler bfq registered
Nov 12 20:44:53.933638 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:44:53.933647 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:44:53.933663 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:44:53.933680 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:44:53.933692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:44:53.933704 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:44:53.933722 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:44:53.933734 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:44:53.933751 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:44:53.933763 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:44:53.933991 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:44:53.934165 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:44:53.934319 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:44:53 UTC (1731444293)
Nov 12 20:44:53.934501 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:44:53.934523 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:44:53.934535 kernel: efifb: probing for efifb
Nov 12 20:44:53.934556 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 12 20:44:53.934566 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 12 20:44:53.934580 kernel: efifb: scrolling: redraw
Nov 12 20:44:53.934591 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 12 20:44:53.934603 kernel: Console: switching to colour frame buffer device 100x37
Nov 12 20:44:53.934643 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:44:53.934659 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:44:53.934676 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:44:53.934689 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:44:53.934701 kernel: Segment Routing with IPv6
Nov 12 20:44:53.934714 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:44:53.934726 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:44:53.934738 kernel: Key type dns_resolver registered
Nov 12 20:44:53.934749 kernel: IPI shorthand broadcast: enabled
Nov 12 20:44:53.934760 kernel: sched_clock: Marking stable (1017003271, 125499483)->(1292189412, -149686658)
Nov 12 20:44:53.934770 kernel: registered taskstats version 1
Nov 12 20:44:53.934783 kernel: Loading compiled-in X.509 certificates
Nov 12 20:44:53.934794 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:44:53.934804 kernel: Key type .fscrypt registered
Nov 12 20:44:53.934814 kernel: Key type fscrypt-provisioning registered
Nov 12 20:44:53.934824 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:44:53.934834 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:44:53.934844 kernel: ima: No architecture policies found
Nov 12 20:44:53.934854 kernel: clk: Disabling unused clocks
Nov 12 20:44:53.934864 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:44:53.934878 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:44:53.934891 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:44:53.934901 kernel: Run /init as init process
Nov 12 20:44:53.934911 kernel: with arguments:
Nov 12 20:44:53.934922 kernel: /init
Nov 12 20:44:53.934933 kernel: with environment:
Nov 12 20:44:53.934942 kernel: HOME=/
Nov 12 20:44:53.934950 kernel: TERM=linux
Nov 12 20:44:53.934958 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:44:53.934973 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:44:53.934983 systemd[1]: Detected virtualization kvm.
Nov 12 20:44:53.934992 systemd[1]: Detected architecture x86-64.
Nov 12 20:44:53.935001 systemd[1]: Running in initrd.
Nov 12 20:44:53.935014 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:44:53.935023 systemd[1]: Hostname set to .
Nov 12 20:44:53.935032 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:44:53.935040 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:44:53.935048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:44:53.935057 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:44:53.935066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:44:53.935075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:44:53.935089 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:44:53.935100 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:44:53.935113 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:44:53.935124 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:44:53.935135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:44:53.935149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:44:53.935160 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:44:53.935174 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:44:53.935185 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:44:53.935195 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:44:53.935206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:44:53.935217 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:44:53.935228 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:44:53.935238 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:44:53.935249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:44:53.935264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:44:53.935275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:44:53.935286 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:44:53.935304 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:44:53.935316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:44:53.935327 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:44:53.935339 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:44:53.935351 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:44:53.935363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:44:53.935380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:44:53.935477 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:44:53.935489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:44:53.935501 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:44:53.935562 systemd-journald[190]: Collecting audit messages is disabled.
Nov 12 20:44:53.935601 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:44:53.935614 systemd-journald[190]: Journal started
Nov 12 20:44:53.935643 systemd-journald[190]: Runtime Journal (/run/log/journal/137415cdc42646f7838c609cbe17e563) is 6.0M, max 48.3M, 42.2M free.
Nov 12 20:44:53.928722 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 20:44:53.938304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:44:53.942196 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:44:53.943555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:44:53.960417 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:44:53.963619 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:44:53.968250 kernel: Bridge firewalling registered
Nov 12 20:44:53.963956 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 20:44:53.966439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:44:53.969572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:44:53.972239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:44:53.976234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:44:53.989693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:44:53.992787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:44:53.993090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:44:53.997235 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:44:54.009986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:44:54.012957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:44:54.016096 dracut-cmdline[228]: dracut-dracut-053
Nov 12 20:44:54.019980 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:44:54.074558 systemd-resolved[236]: Positive Trust Anchors:
Nov 12 20:44:54.074576 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:44:54.074617 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:44:54.087432 systemd-resolved[236]: Defaulting to hostname 'linux'.
Nov 12 20:44:54.093922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:44:54.096335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:44:54.122434 kernel: SCSI subsystem initialized
Nov 12 20:44:54.133433 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:44:54.145449 kernel: iscsi: registered transport (tcp)
Nov 12 20:44:54.168782 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:44:54.168865 kernel: QLogic iSCSI HBA Driver
Nov 12 20:44:54.223997 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:44:54.237707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:44:54.266472 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:44:54.266575 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:44:54.268421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:44:54.313426 kernel: raid6: avx2x4 gen() 26077 MB/s
Nov 12 20:44:54.330424 kernel: raid6: avx2x2 gen() 24968 MB/s
Nov 12 20:44:54.347716 kernel: raid6: avx2x1 gen() 24174 MB/s
Nov 12 20:44:54.347777 kernel: raid6: using algorithm avx2x4 gen() 26077 MB/s
Nov 12 20:44:54.365691 kernel: raid6: .... xor() 7978 MB/s, rmw enabled
Nov 12 20:44:54.365736 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:44:54.387422 kernel: xor: automatically using best checksumming function avx
Nov 12 20:44:54.559440 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:44:54.574136 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:44:54.582817 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:44:54.604318 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Nov 12 20:44:54.611862 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:44:54.630610 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:44:54.646649 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Nov 12 20:44:54.687121 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:44:54.703713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:44:54.773560 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:44:54.793583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:44:54.806815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:44:54.811270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:44:54.814733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:44:54.816435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:44:54.823430 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:44:54.843587 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:44:54.848528 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:44:54.848562 kernel: GPT:9289727 != 19775487
Nov 12 20:44:54.848576 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:44:54.848590 kernel: GPT:9289727 != 19775487
Nov 12 20:44:54.848603 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:44:54.848613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:44:54.848624 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:44:54.833815 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:44:54.852555 kernel: libata version 3.00 loaded.
Nov 12 20:44:54.851471 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:44:54.855483 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:44:54.855512 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:44:54.858806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:44:54.868215 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:44:54.906617 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:44:54.906657 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:44:54.906890 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:44:54.907095 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474)
Nov 12 20:44:54.907114 kernel: scsi host0: ahci
Nov 12 20:44:54.907729 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460)
Nov 12 20:44:54.907756 kernel: scsi host1: ahci
Nov 12 20:44:54.907977 kernel: scsi host2: ahci
Nov 12 20:44:54.908188 kernel: scsi host3: ahci
Nov 12 20:44:54.908424 kernel: scsi host4: ahci
Nov 12 20:44:54.908660 kernel: scsi host5: ahci
Nov 12 20:44:54.909002 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 12 20:44:54.909021 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 12 20:44:54.909043 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 12 20:44:54.909059 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 12 20:44:54.909075 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 12 20:44:54.909091 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 12 20:44:54.859051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:44:54.861006 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:44:54.862552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:44:54.862853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:44:54.864819 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:44:54.873992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:44:54.892047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:44:54.902566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:44:54.902636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:44:54.907520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:44:54.913562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:44:54.919644 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:44:54.927831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:44:54.933833 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:44:54.933944 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:44:54.938612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:44:54.950593 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:44:54.953224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:44:54.959609 disk-uuid[559]: Primary Header is updated.
Nov 12 20:44:54.959609 disk-uuid[559]: Secondary Entries is updated.
Nov 12 20:44:54.959609 disk-uuid[559]: Secondary Header is updated.
Nov 12 20:44:54.962988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:44:54.968421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:44:54.982576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:44:55.214422 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:44:55.214525 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:44:55.215408 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:44:55.216417 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:44:55.222412 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:44:55.222427 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:44:55.223420 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:44:55.224640 kernel: ata3.00: applying bridge limits
Nov 12 20:44:55.224654 kernel: ata3.00: configured for UDMA/100
Nov 12 20:44:55.225421 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:44:55.270430 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:44:55.284255 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:44:55.284282 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:44:55.972429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:44:55.972604 disk-uuid[560]: The operation has completed successfully.
Nov 12 20:44:56.007474 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:44:56.007669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:44:56.039803 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:44:56.044247 sh[597]: Success
Nov 12 20:44:56.059443 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:44:56.093975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:44:56.110210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:44:56.113411 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:44:56.126095 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:44:56.126136 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:44:56.126152 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:44:56.127165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:44:56.127962 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:44:56.133006 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:44:56.134627 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:44:56.153550 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:44:56.155592 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:44:56.169974 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:44:56.170027 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:44:56.170038 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:44:56.172417 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:44:56.182977 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:44:56.184763 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:44:56.276681 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:44:56.289691 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:44:56.292752 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:44:56.297844 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:44:56.321427 systemd-networkd[775]: lo: Link UP
Nov 12 20:44:56.321441 systemd-networkd[775]: lo: Gained carrier
Nov 12 20:44:56.325512 systemd-networkd[775]: Enumeration completed
Nov 12 20:44:56.325729 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:44:56.327696 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:44:56.327701 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:44:56.328974 systemd-networkd[775]: eth0: Link UP
Nov 12 20:44:56.328979 systemd-networkd[775]: eth0: Gained carrier
Nov 12 20:44:56.328988 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:44:56.329704 systemd[1]: Reached target network.target - Network.
Nov 12 20:44:56.353642 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:44:56.370462 ignition[778]: Ignition 2.19.0
Nov 12 20:44:56.370474 ignition[778]: Stage: fetch-offline
Nov 12 20:44:56.370524 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:44:56.370535 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:44:56.370639 ignition[778]: parsed url from cmdline: ""
Nov 12 20:44:56.370643 ignition[778]: no config URL provided
Nov 12 20:44:56.370649 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:44:56.370660 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:44:56.370690 ignition[778]: op(1): [started] loading QEMU firmware config module
Nov 12 20:44:56.370699 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:44:56.378093 ignition[778]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:44:56.422148 ignition[778]: parsing config with SHA512: 0d88b879e3a59dafebb9c8641e37d9a149a476b2c8b436a4b756912360ce3eb200538c468919dad260b2c3a06ad67724a7eea58527aea826a97bfd2fc5fe9b2f
Nov 12 20:44:56.425974 unknown[778]: fetched base config from "system"
Nov 12 20:44:56.425995 unknown[778]: fetched user config from "qemu"
Nov 12 20:44:56.426603 ignition[778]: fetch-offline: fetch-offline passed
Nov 12 20:44:56.426721 ignition[778]: Ignition finished successfully
Nov 12 20:44:56.429279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:44:56.431447 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:44:56.439621 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:44:56.453649 ignition[789]: Ignition 2.19.0
Nov 12 20:44:56.453667 ignition[789]: Stage: kargs
Nov 12 20:44:56.453888 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:44:56.453900 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:44:56.454930 ignition[789]: kargs: kargs passed
Nov 12 20:44:56.454993 ignition[789]: Ignition finished successfully
Nov 12 20:44:56.459081 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:44:56.473745 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:44:56.523880 ignition[797]: Ignition 2.19.0
Nov 12 20:44:56.523892 ignition[797]: Stage: disks
Nov 12 20:44:56.524082 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:44:56.524093 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:44:56.527304 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:44:56.524982 ignition[797]: disks: disks passed
Nov 12 20:44:56.529954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:44:56.525037 ignition[797]: Ignition finished successfully
Nov 12 20:44:56.532256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:44:56.533762 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:44:56.535811 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:44:56.537036 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:44:56.550752 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:44:56.567292 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:44:56.575374 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:44:56.590530 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:44:56.679423 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:44:56.680233 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:44:56.681108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:44:56.694636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:44:56.696189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:44:56.697772 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:44:56.697829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:44:56.697859 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:44:56.705894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:44:56.707999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:44:56.714415 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Nov 12 20:44:56.717553 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:44:56.717579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:44:56.717591 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:44:56.720428 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:44:56.723258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:44:56.769246 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:44:56.774497 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:44:56.779441 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:44:56.783862 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:44:56.869932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:44:56.887512 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:44:56.890348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:44:56.899418 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:44:56.917108 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:44:56.922106 ignition[930]: INFO : Ignition 2.19.0 Nov 12 20:44:56.922106 ignition[930]: INFO : Stage: mount Nov 12 20:44:56.923786 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:44:56.923786 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:44:56.923786 ignition[930]: INFO : mount: mount passed Nov 12 20:44:56.923786 ignition[930]: INFO : Ignition finished successfully Nov 12 20:44:56.929590 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:44:56.936622 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:44:57.125732 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:44:57.139667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:44:57.148448 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Nov 12 20:44:57.148506 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:44:57.148532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:44:57.149941 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:44:57.152412 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:44:57.154372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:44:57.187534 ignition[960]: INFO : Ignition 2.19.0 Nov 12 20:44:57.187534 ignition[960]: INFO : Stage: files Nov 12 20:44:57.189400 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:44:57.189400 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:44:57.189400 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:44:57.193104 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:44:57.193104 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:44:57.196284 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:44:57.197880 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:44:57.199991 unknown[960]: wrote ssh authorized keys file for user: core Nov 12 20:44:57.201332 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:44:57.203738 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:44:57.205736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:44:57.207777 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:44:57.210033 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:44:57.261217 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 20:44:57.445714 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:44:57.445714 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:44:57.450320 ignition[960]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:44:57.450320 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:44:57.479543 systemd-networkd[775]: eth0: Gained IPv6LL Nov 12 20:44:57.782344 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 20:44:58.438528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:44:58.438528 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 12 20:44:58.442633 ignition[960]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:44:58.486210 ignition[960]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:44:58.509233 ignition[960]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:44:58.510968 ignition[960]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:44:58.510968 ignition[960]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Nov 
12 20:44:58.513782 ignition[960]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:44:58.515224 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:44:58.517218 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:44:58.518925 ignition[960]: INFO : files: files passed Nov 12 20:44:58.519680 ignition[960]: INFO : Ignition finished successfully Nov 12 20:44:58.523206 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:44:58.560890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:44:58.562247 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:44:58.574200 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:44:58.574519 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:44:58.578965 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:44:58.585438 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:44:58.585438 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:44:58.589346 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:44:58.593818 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:44:58.595497 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:44:58.609849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:44:58.654911 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:44:58.655085 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:44:58.657872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:44:58.660376 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:44:58.662901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:44:58.664227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:44:58.688218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:44:58.701768 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:44:58.715765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:44:58.716056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:44:58.719603 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:44:58.721781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:44:58.721919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:44:58.726028 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:44:58.726224 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:44:58.726746 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
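
The files stage above is the observable effect of the instance's declarative config: SSH keys for core, /etc/flatcar/update.conf, a helm tarball for prepare-helm.service, a containerd drop-in, and the kubernetes sysext image linked into /etc/extensions. A hedged Butane-style sketch of a config that would produce this shape of run (abridged; key material and unit bodies are placeholders, only the paths and download URLs are taken from the log):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...placeholder
    storage:
      files:
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              REBOOT_STRATEGY=off
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
    systemd:
      units:
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                [Service]
                # placeholder drop-in body
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            # placeholder: unpack /opt/helm-v3.13.2-linux-amd64.tar.gz to /opt/bin
            [Install]
            WantedBy=multi-user.target
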
Nov 12 20:44:58.727112 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:44:58.727477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:44:58.727985 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:44:58.736894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:44:58.740004 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:44:58.742236 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:44:58.744204 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:44:58.745964 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:44:58.746102 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:44:58.748817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:44:58.750903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:44:58.753106 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:44:58.753218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:44:58.755488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:44:58.755606 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:44:58.759972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:44:58.760135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:44:58.761172 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:44:58.763192 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:44:58.766536 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:44:58.769873 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:44:58.770895 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:44:58.773632 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:44:58.773794 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:44:58.774749 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:44:58.774882 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:44:58.776767 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:44:58.776891 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:44:58.780429 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:44:58.780587 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:44:58.799760 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:44:58.800935 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:44:58.801137 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:44:58.804449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:44:58.806433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:44:58.806572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:44:58.831574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:44:58.831683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 12 20:44:58.840472 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:44:58.840608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:44:58.858701 ignition[1014]: INFO : Ignition 2.19.0 Nov 12 20:44:58.858701 ignition[1014]: INFO : Stage: umount Nov 12 20:44:58.869478 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:44:58.869478 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:44:58.869478 ignition[1014]: INFO : umount: umount passed Nov 12 20:44:58.869478 ignition[1014]: INFO : Ignition finished successfully Nov 12 20:44:58.870769 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:44:58.876756 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:44:58.876932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:44:58.878150 systemd[1]: Stopped target network.target - Network. Nov 12 20:44:58.880790 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:44:58.880859 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:44:58.882794 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:44:58.882849 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:44:58.883825 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:44:58.883884 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:44:58.884148 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:44:58.884202 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:44:58.884846 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:44:58.890605 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:44:58.901233 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:44:58.901456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:44:58.902534 systemd-networkd[775]: eth0: DHCPv6 lease lost Nov 12 20:44:58.905157 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:44:58.905337 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:44:58.907750 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:44:58.907840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:44:58.915528 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:44:58.917622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:44:58.917695 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:44:58.920130 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:44:58.920208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:44:58.923052 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:44:58.923124 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:44:58.925468 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:44:58.925526 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:44:58.928298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 12 20:44:58.944947 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:44:58.946096 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:44:58.948447 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:44:58.949506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:44:58.984245 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:44:58.985349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:44:58.987600 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:44:58.987649 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:44:58.990676 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:44:58.990735 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:44:58.993782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:44:58.993840 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:44:58.996804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:44:58.996862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:44:59.018615 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:44:59.021050 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:44:59.021137 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:44:59.025040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:44:59.026197 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:44:59.029687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:44:59.031092 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:44:59.153188 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:44:59.153381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:44:59.156437 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:44:59.157591 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:44:59.157688 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:44:59.167843 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:44:59.181219 systemd[1]: Switching root. Nov 12 20:44:59.246416 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). 
Nov 12 20:44:59.246497 systemd-journald[190]: Journal stopped Nov 12 20:45:01.618145 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:45:01.618247 kernel: SELinux: policy capability open_perms=1 Nov 12 20:45:01.618263 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:45:01.618283 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:45:01.618297 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:45:01.618312 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:45:01.618326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:45:01.618360 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:45:01.618379 kernel: audit: type=1403 audit(1731444300.640:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:45:01.618411 systemd[1]: Successfully loaded SELinux policy in 66.209ms. Nov 12 20:45:01.619477 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.789ms. Nov 12 20:45:01.619503 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:45:01.619521 systemd[1]: Detected virtualization kvm. Nov 12 20:45:01.619538 systemd[1]: Detected architecture x86-64. Nov 12 20:45:01.619555 systemd[1]: Detected first boot. Nov 12 20:45:01.619572 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:45:01.619588 zram_generator::config[1080]: No configuration found. Nov 12 20:45:01.619613 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:45:01.619630 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:45:01.619653 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:45:01.619671 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:45:01.619689 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:45:01.619706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:45:01.619723 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:45:01.619740 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:45:01.619763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:45:01.619782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:45:01.619800 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:45:01.619817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:45:01.619834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:45:01.619851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:45:01.619875 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:45:01.619899 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:45:01.619916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
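
"Detected first boot" and "Populated /etc with preset unit settings" go together: on first boot systemd applies preset policy to decide which units come up enabled. Presets are one-directive-per-line text files, and the "setting preset to disabled/enabled" operations in the Ignition files stage above feed the same mechanism. A sketch of the format (file name illustrative):

    # /etc/systemd/system-preset/90-example.preset (sketch)
    enable prepare-helm.service
    disable coreos-metadata.service
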
Nov 12 20:45:01.619937 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:45:01.619954 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:45:01.619971 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:45:01.619988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:45:01.620005 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:45:01.620022 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:45:01.620039 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:45:01.620056 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:45:01.620077 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:45:01.620094 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:45:01.620112 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:45:01.620129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:45:01.620147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:45:01.620170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:45:01.620187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:45:01.620204 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:45:01.620221 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:45:01.620242 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:45:01.620260 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:01.620277 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:45:01.620294 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:45:01.620310 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:45:01.620327 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:45:01.620344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:45:01.620372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:45:01.620404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:45:01.620429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:45:01.620445 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:45:01.620462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:45:01.620479 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:45:01.620496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:45:01.620514 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:45:01.620531 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Nov 12 20:45:01.620551 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 12 20:45:01.620575 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:45:01.620625 systemd-journald[1154]: Collecting audit messages is disabled. Nov 12 20:45:01.620658 kernel: loop: module loaded Nov 12 20:45:01.620675 kernel: fuse: init (API version 7.39) Nov 12 20:45:01.620692 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:45:01.620708 systemd-journald[1154]: Journal started Nov 12 20:45:01.620742 systemd-journald[1154]: Runtime Journal (/run/log/journal/137415cdc42646f7838c609cbe17e563) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:45:01.651946 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:45:01.659407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:45:01.671200 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:45:01.671274 kernel: ACPI: bus type drm_connector registered Nov 12 20:45:01.672402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:01.679408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:45:01.681531 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:45:01.682775 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:45:01.683987 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:45:01.686019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:45:01.687518 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:45:01.689433 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:45:01.691168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:45:01.692974 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:45:01.693259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:45:01.695473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:45:01.695765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:45:01.697640 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:45:01.697905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:45:01.699858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:45:01.700141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:45:01.702245 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:45:01.702537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:45:01.704223 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:45:01.704536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:45:01.706456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:45:01.708585 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:45:01.710307 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
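
Two patterns are worth noting in this stretch. The journald warning appears because systemd-journald.service carries an IP sandboxing directive while this build lacks BPF/cgroup firewalling support (note -BPF_FRAMEWORK in the feature string above); the directive in question is of the form:

    # from systemd-journald.service (sketch)
    [Service]
    IPAddressDeny=any

And the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units are instances of a single template that loads one kernel module per instance; its core is roughly:

    # modprobe@.service (abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

so `systemctl start modprobe@fuse.service` loads fuse, matching the "fuse: init" kernel line above.
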
Nov 12 20:45:01.712264 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:45:01.726976 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:45:01.735593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:45:01.738301 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:45:01.739753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:45:01.743548 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:45:01.747525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:45:01.759550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:45:01.765523 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:45:01.766981 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:45:01.768922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:45:01.777121 systemd-journald[1154]: Time spent on flushing to /var/log/journal/137415cdc42646f7838c609cbe17e563 is 36.620ms for 982 entries. Nov 12 20:45:01.777121 systemd-journald[1154]: System Journal (/var/log/journal/137415cdc42646f7838c609cbe17e563) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:45:01.834767 systemd-journald[1154]: Received client request to flush runtime journal. Nov 12 20:45:01.773598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:45:01.777084 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:45:01.780315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:45:01.794395 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:45:01.796580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:45:01.799177 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:45:01.806802 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:45:01.841970 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:45:01.844441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:45:01.847665 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:45:01.849680 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 12 20:45:01.849699 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 12 20:45:01.856297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:45:01.862778 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:45:01.889858 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:45:01.897613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:45:01.920629 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
Nov 12 20:45:01.920651 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Nov 12 20:45:01.929406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:45:02.509920 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:45:02.518601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:45:02.550786 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Nov 12 20:45:02.570854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:45:02.585671 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:45:02.604705 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:45:02.614482 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 12 20:45:02.620543 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1247) Nov 12 20:45:02.623601 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1247) Nov 12 20:45:02.678419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1246) Nov 12 20:45:02.696226 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:45:02.748374 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:45:02.751437 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:45:02.752123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:45:02.767424 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:45:02.851556 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 20:45:02.857651 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 20:45:02.857936 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 20:45:02.858164 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 20:45:02.858370 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:45:02.864712 systemd-networkd[1250]: lo: Link UP Nov 12 20:45:02.864729 systemd-networkd[1250]: lo: Gained carrier Nov 12 20:45:02.867457 systemd-networkd[1250]: Enumeration completed Nov 12 20:45:02.868021 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:45:02.868028 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:45:02.869504 systemd-networkd[1250]: eth0: Link UP Nov 12 20:45:02.869510 systemd-networkd[1250]: eth0: Gained carrier Nov 12 20:45:02.869526 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:45:02.918091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:45:02.920889 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:45:02.930181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
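
As in the initrd, eth0 matches the stock catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about a "potentially unpredictable interface name". The shipped unit is essentially a match-everything DHCP fallback; a minimal sketch (the real file also tunes DHCP client options):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*
    [Network]
    DHCP=yes

A unit dropped into /etc/systemd/network that matches by MACAddress= sorts earlier and overrides it for a specific NIC.
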
Nov 12 20:45:02.943690 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:45:02.999882 kernel: kvm_amd: TSC scaling supported Nov 12 20:45:02.999989 kernel: kvm_amd: Nested Virtualization enabled Nov 12 20:45:03.000044 kernel: kvm_amd: Nested Paging enabled Nov 12 20:45:03.000064 kernel: kvm_amd: LBR virtualization supported Nov 12 20:45:03.000470 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 20:45:03.001841 kernel: kvm_amd: Virtual GIF supported Nov 12 20:45:03.029431 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:45:03.048961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:45:03.060181 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:45:03.072580 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:45:03.085860 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:45:03.133502 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:45:03.136352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:45:03.149769 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:45:03.158343 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:45:03.195885 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:45:03.197897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:45:03.199494 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:45:03.199529 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:45:03.200727 systemd[1]: Reached target machines.target - Containers. Nov 12 20:45:03.203148 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:45:03.214799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:45:03.218855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:45:03.220243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:45:03.221953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:45:03.225250 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:45:03.231281 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:45:03.234474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:45:03.249088 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:45:03.255436 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:45:03.273870 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:45:03.274983 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 20:45:03.284418 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:45:03.317421 kernel: loop1: detected capacity change from 0 to 211296 Nov 12 20:45:03.364417 kernel: loop2: detected capacity change from 0 to 140768 Nov 12 20:45:03.455436 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 20:45:03.474410 kernel: loop4: detected capacity change from 0 to 211296 Nov 12 20:45:03.482413 kernel: loop5: detected capacity change from 0 to 140768 Nov 12 20:45:03.491332 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:45:03.492067 (sd-merge)[1311]: Merged extensions into '/usr'. Nov 12 20:45:03.497593 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:45:03.497633 systemd[1]: Reloading... Nov 12 20:45:03.614156 zram_generator::config[1339]: No configuration found. Nov 12 20:45:03.774288 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:45:03.831817 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:45:03.915950 systemd[1]: Reloading finished in 417 ms. Nov 12 20:45:03.942979 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:45:03.945039 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:45:03.964806 systemd[1]: Starting ensure-sysext.service... Nov 12 20:45:03.968310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:45:03.976115 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:45:03.976142 systemd[1]: Reloading... Nov 12 20:45:04.051694 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:45:04.052213 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:45:04.053610 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:45:04.054035 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 12 20:45:04.054150 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 12 20:45:04.061778 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:45:04.061797 systemd-tmpfiles[1384]: Skipping /boot Nov 12 20:45:04.077964 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:45:04.077988 systemd-tmpfiles[1384]: Skipping /boot Nov 12 20:45:04.085414 zram_generator::config[1415]: No configuration found. Nov 12 20:45:04.207340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:45:04.263594 systemd-networkd[1250]: eth0: Gained IPv6LL Nov 12 20:45:04.281757 systemd[1]: Reloading finished in 304 ms. Nov 12 20:45:04.303909 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:45:04.318019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
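
The (sd-merge) lines are systemd-sysext at work: the containerd-flatcar and docker-flatcar extensions shipped with the OS, plus the kubernetes image linked into /etc/extensions by Ignition, are overlaid onto /usr, and systemd then reloads so the new content and unit files become visible; the loop0-loop5 capacity-change lines above are those images being attached. For an image to merge, it must identify the target OS via an extension-release file; a sketch of the expected contents:

    # inside the image: usr/lib/extension-release.d/extension-release.kubernetes (sketch)
    ID=flatcar
    SYSEXT_LEVEL=1.0

On the running host, `systemd-sysext status` lists what was merged.
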
Nov 12 20:45:04.333641 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:45:04.336709 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:45:04.339501 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:45:04.376027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:45:04.455334 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:45:04.463810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.463990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:45:04.479246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:45:04.548722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:45:04.557582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:45:04.558942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:45:04.559057 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.561511 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:45:04.563574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:45:04.563919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:45:04.566184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:45:04.566441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:45:04.635983 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:45:04.636420 augenrules[1487]: No rules Nov 12 20:45:04.636262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:45:04.638305 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:45:04.645448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:45:04.645767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:45:04.654733 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:45:04.656737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:45:04.658625 systemd-resolved[1463]: Positive Trust Anchors: Nov 12 20:45:04.658652 systemd-resolved[1463]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:45:04.658693 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:45:04.662649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:45:04.663861 systemd-resolved[1463]: Defaulting to hostname 'linux'. Nov 12 20:45:04.682820 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:45:04.686342 systemd[1]: Reached target network.target - Network. Nov 12 20:45:04.687544 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:45:04.688807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:45:04.690106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.690304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:45:04.706651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:45:04.710616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:45:04.712815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:45:04.713916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:45:04.714131 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:45:04.714407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.716085 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:45:04.718018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:45:04.718230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:45:04.720207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:45:04.720679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:45:04.722856 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:45:04.723128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:45:04.731025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.731428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:45:04.741603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:45:04.754073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
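
The record systemd-resolved logs under "Positive Trust Anchors" (. IN DS 20326 8 2 e06d...) is the built-in DNSSEC root trust anchor, the 2017 root KSK with key tag 20326; the long negative list exempts private and reverse-lookup zones from validation. Whether validation is actually enforced depends on resolved.conf; a sketch:

    # /etc/systemd/resolved.conf (sketch)
    [Resolve]
    DNSSEC=allow-downgrade
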
Nov 12 20:45:04.775480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:45:04.777815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:45:04.779168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:45:04.779303 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:45:04.779397 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:45:04.780566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:45:04.780833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:45:04.782998 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:45:04.783261 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:45:04.802167 systemd[1]: Finished ensure-sysext.service. Nov 12 20:45:04.805030 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:45:04.805348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:45:04.807730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:45:04.807966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:45:04.813349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:45:04.813479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:45:04.825596 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:45:04.916101 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:45:04.960705 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:45:05.414803 systemd-resolved[1463]: Clock change detected. Flushing caches. Nov 12 20:45:05.414823 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 20:45:05.414890 systemd-timesyncd[1533]: Initial clock synchronization to Tue 2024-11-12 20:45:05.414658 UTC. Nov 12 20:45:05.415365 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:45:05.416731 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:45:05.418151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:45:05.419776 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:45:05.419812 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:45:05.420879 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:45:05.422310 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:45:05.423696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:45:05.425060 systemd[1]: Reached target timers.target - Timer Units. 
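
systemd-timesyncd stepping the clock against 10.0.0.1:123 (the gateway on this QEMU network, with the NTP server most plausibly learned from the DHCP lease) is what triggers resolved's "Clock change detected. Flushing caches." above. A static equivalent would be:

    # /etc/systemd/timesyncd.conf (sketch)
    [Time]
    NTP=10.0.0.1
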
Nov 12 20:45:05.426899 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:45:05.430442 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:45:05.432970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:45:05.441762 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:45:05.465536 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:45:05.466551 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:45:05.467702 systemd[1]: System is tainted: cgroupsv1 Nov 12 20:45:05.467748 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:45:05.467774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:45:05.469110 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:45:05.471570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:45:05.473999 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:45:05.476148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:45:05.481405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:45:05.500947 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:45:05.503735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:05.505211 jq[1540]: false Nov 12 20:45:05.507363 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:45:05.511096 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:45:05.520002 extend-filesystems[1542]: Found loop3 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found loop4 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found loop5 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found sr0 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda1 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda2 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda3 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found usr Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda4 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda6 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda7 Nov 12 20:45:05.520002 extend-filesystems[1542]: Found vda9 Nov 12 20:45:05.520002 extend-filesystems[1542]: Checking size of /dev/vda9 Nov 12 20:45:05.583311 dbus-daemon[1539]: [system] SELinux support is enabled Nov 12 20:45:05.520576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:45:05.583715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:45:05.587661 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:45:05.591863 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:45:05.593562 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:45:05.595071 systemd[1]: Starting update-engine.service - Update Engine... 
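The `System is tainted: cgroupsv1` record above means this image still boots systemd on the legacy cgroup v1 hierarchy, which also matches the kubelet's `cgroupfs` driver seen later in this log. A quick way to check which hierarchy a host is on, assuming nothing beyond coreutils:

```shell
# "tmpfs" indicates the legacy/hybrid v1 layout; a pure unified
# (cgroup v2) host reports "cgroup2fs" here instead.
stat -fc %T /sys/fs/cgroup/
```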
Nov 12 20:45:05.599597 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:45:05.601672 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:45:05.614654 extend-filesystems[1542]: Resized partition /dev/vda9 Nov 12 20:45:05.663040 jq[1566]: true Nov 12 20:45:05.610872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:45:05.663274 extend-filesystems[1577]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:45:05.666303 update_engine[1565]: I20241112 20:45:05.633953 1565 main.cc:92] Flatcar Update Engine starting Nov 12 20:45:05.666303 update_engine[1565]: I20241112 20:45:05.637331 1565 update_check_scheduler.cc:74] Next update check in 3m23s Nov 12 20:45:05.611236 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:45:05.614040 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:45:05.614366 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:45:05.633900 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:45:05.634236 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:45:05.645270 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:45:05.661910 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:45:05.670003 jq[1584]: true Nov 12 20:45:05.697646 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 20:45:05.697732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1242) Nov 12 20:45:05.676983 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:45:05.677358 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:45:05.710725 tar[1581]: linux-amd64/helm Nov 12 20:45:05.711136 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:45:05.713266 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:45:05.713384 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:45:05.713415 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:45:05.714894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:45:05.714910 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:45:05.717207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:45:05.781094 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:45:06.103956 systemd-logind[1564]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:45:06.103989 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:45:06.108565 systemd-logind[1564]: New seat seat0. Nov 12 20:45:06.112987 systemd[1]: Started systemd-logind.service - User Login Management. 
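The extend-filesystems records around here grow the root filesystem online: the kernel reports ext4 on /dev/vda9 being resized from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB). A manual equivalent of what the service does, as a sketch:

```shell
# Online grow of a mounted ext4 filesystem; with no size argument,
# resize2fs expands to fill the underlying partition (ext4 supports
# this while the filesystem is mounted on /).
resize2fs /dev/vda9

# Sanity check: 1864699 blocks * 4096 bytes/block ~= 7.1 GiB
df -h /
```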
Nov 12 20:45:06.116254 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:45:06.126484 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 20:45:06.145745 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:45:06.151678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:45:06.154616 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:45:06.154616 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:45:06.154616 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 20:45:06.161175 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Nov 12 20:45:06.162184 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:45:06.163891 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:45:06.165592 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:45:06.166019 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:45:06.169260 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:45:06.174883 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:45:06.180764 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:45:06.181136 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:45:06.206782 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:45:06.231018 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:45:06.260029 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:45:06.267081 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:45:06.268552 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:45:06.500365 containerd[1586]: time="2024-11-12T20:45:06.499871799Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:45:06.566686 tar[1581]: linux-amd64/LICENSE Nov 12 20:45:06.566686 tar[1581]: linux-amd64/README.md Nov 12 20:45:06.593081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:45:06.595012 containerd[1586]: time="2024-11-12T20:45:06.594928149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597162 containerd[1586]: time="2024-11-12T20:45:06.597102530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597162 containerd[1586]: time="2024-11-12T20:45:06.597151652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:45:06.597162 containerd[1586]: time="2024-11-12T20:45:06.597176860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:45:06.597413 containerd[1586]: time="2024-11-12T20:45:06.597393927Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Nov 12 20:45:06.597437 containerd[1586]: time="2024-11-12T20:45:06.597415267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597545 containerd[1586]: time="2024-11-12T20:45:06.597519563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597569 containerd[1586]: time="2024-11-12T20:45:06.597545281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597892 containerd[1586]: time="2024-11-12T20:45:06.597859972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597892 containerd[1586]: time="2024-11-12T20:45:06.597884217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597937 containerd[1586]: time="2024-11-12T20:45:06.597912691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:45:06.597937 containerd[1586]: time="2024-11-12T20:45:06.597926937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.598067 containerd[1586]: time="2024-11-12T20:45:06.598041683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.598356 containerd[1586]: time="2024-11-12T20:45:06.598328792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:45:06.598670 containerd[1586]: time="2024-11-12T20:45:06.598639885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:45:06.598670 containerd[1586]: time="2024-11-12T20:45:06.598664221Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:45:06.598799 containerd[1586]: time="2024-11-12T20:45:06.598769739Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:45:06.598874 containerd[1586]: time="2024-11-12T20:45:06.598850831Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:45:06.607514 containerd[1586]: time="2024-11-12T20:45:06.607430083Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:45:06.607646 containerd[1586]: time="2024-11-12T20:45:06.607539418Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:45:06.607646 containerd[1586]: time="2024-11-12T20:45:06.607557883Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:45:06.607646 containerd[1586]: time="2024-11-12T20:45:06.607578561Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Nov 12 20:45:06.607646 containerd[1586]: time="2024-11-12T20:45:06.607594051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:45:06.607868 containerd[1586]: time="2024-11-12T20:45:06.607840323Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:45:06.609248 containerd[1586]: time="2024-11-12T20:45:06.609179947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:45:06.609519 containerd[1586]: time="2024-11-12T20:45:06.609494567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:45:06.609554 containerd[1586]: time="2024-11-12T20:45:06.609523912Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:45:06.609554 containerd[1586]: time="2024-11-12T20:45:06.609543930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:45:06.609629 containerd[1586]: time="2024-11-12T20:45:06.609565801Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609629 containerd[1586]: time="2024-11-12T20:45:06.609591419Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609629 containerd[1586]: time="2024-11-12T20:45:06.609614352Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609706 containerd[1586]: time="2024-11-12T20:45:06.609640832Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609706 containerd[1586]: time="2024-11-12T20:45:06.609666811Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609706 containerd[1586]: time="2024-11-12T20:45:06.609687600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609774 containerd[1586]: time="2024-11-12T20:45:06.609705403Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609774 containerd[1586]: time="2024-11-12T20:45:06.609727565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:45:06.609774 containerd[1586]: time="2024-11-12T20:45:06.609767189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609852 containerd[1586]: time="2024-11-12T20:45:06.609787627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609852 containerd[1586]: time="2024-11-12T20:45:06.609806162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609852 containerd[1586]: time="2024-11-12T20:45:06.609843222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609945 containerd[1586]: time="2024-11-12T20:45:06.609862348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Nov 12 20:45:06.609945 containerd[1586]: time="2024-11-12T20:45:06.609881534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609945 containerd[1586]: time="2024-11-12T20:45:06.609897894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609945 containerd[1586]: time="2024-11-12T20:45:06.609921068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.609945 containerd[1586]: time="2024-11-12T20:45:06.609938160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610062 containerd[1586]: time="2024-11-12T20:45:06.609960512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610062 containerd[1586]: time="2024-11-12T20:45:06.609977534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610062 containerd[1586]: time="2024-11-12T20:45:06.609995738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610062 containerd[1586]: time="2024-11-12T20:45:06.610015415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610062 containerd[1586]: time="2024-11-12T20:45:06.610038739Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:45:06.610174 containerd[1586]: time="2024-11-12T20:45:06.610073654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610174 containerd[1586]: time="2024-11-12T20:45:06.610092439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610174 containerd[1586]: time="2024-11-12T20:45:06.610109822Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:45:06.610241 containerd[1586]: time="2024-11-12T20:45:06.610199090Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:45:06.610241 containerd[1586]: time="2024-11-12T20:45:06.610227473Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:45:06.610286 containerd[1586]: time="2024-11-12T20:45:06.610244605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:45:06.610286 containerd[1586]: time="2024-11-12T20:45:06.610262699Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:45:06.610286 containerd[1586]: time="2024-11-12T20:45:06.610276896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610380 containerd[1586]: time="2024-11-12T20:45:06.610311681Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:45:06.610380 containerd[1586]: time="2024-11-12T20:45:06.610326980Z" level=info msg="NRI interface is disabled by configuration." 
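All of the snapshotter skips in the plugin walk above are environmental rather than errors: no aufs module in this kernel, an ext4 root where btrfs/zfs snapshotters need their own filesystem, and no devmapper or tracing configuration. After boot, those decisions can be confirmed from containerd's plugin table; a sketch assuming the bundled `ctr` client:

```shell
# The STATUS column shows "ok" for overlayfs and "skip" for the
# snapshotters containerd declined to load on this ext4 root.
ctr plugins ls
```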
Nov 12 20:45:06.610380 containerd[1586]: time="2024-11-12T20:45:06.610342308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:45:06.610920 containerd[1586]: time="2024-11-12T20:45:06.610811208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:45:06.610920 containerd[1586]: time="2024-11-12T20:45:06.610921606Z" level=info msg="Connect containerd service" Nov 12 20:45:06.611124 containerd[1586]: time="2024-11-12T20:45:06.610986968Z" level=info msg="using legacy CRI server" Nov 12 20:45:06.611124 containerd[1586]: time="2024-11-12T20:45:06.611002257Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:45:06.611180 containerd[1586]: time="2024-11-12T20:45:06.611119717Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:45:06.611867 containerd[1586]: time="2024-11-12T20:45:06.611825993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:45:06.612210 containerd[1586]: time="2024-11-12T20:45:06.612085019Z" level=info msg="Start subscribing containerd event" Nov 12 20:45:06.612210 containerd[1586]: time="2024-11-12T20:45:06.612166672Z" level=info msg="Start recovering state" Nov 12 20:45:06.612357 containerd[1586]: time="2024-11-12T20:45:06.612322164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:45:06.612480 containerd[1586]: time="2024-11-12T20:45:06.612412704Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:45:06.612480 containerd[1586]: time="2024-11-12T20:45:06.612434996Z" level=info msg="Start event monitor" Nov 12 20:45:06.612630 containerd[1586]: time="2024-11-12T20:45:06.612589275Z" level=info msg="Start snapshots syncer" Nov 12 20:45:06.612630 containerd[1586]: time="2024-11-12T20:45:06.612609223Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:45:06.613048 containerd[1586]: time="2024-11-12T20:45:06.612908645Z" level=info msg="Start streaming server" Nov 12 20:45:06.613119 containerd[1586]: time="2024-11-12T20:45:06.613085587Z" level=info msg="containerd successfully booted in 0.118098s" Nov 12 20:45:06.613334 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:45:07.390807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:07.392802 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:45:07.394193 systemd[1]: Startup finished in 8.056s (kernel) + 6.365s (userspace) = 14.421s. Nov 12 20:45:07.398012 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:07.810600 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:45:07.878948 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:49814.service - OpenSSH per-connection server daemon (10.0.0.1:49814). Nov 12 20:45:07.978376 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 49814 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:07.981654 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:07.994565 systemd-logind[1564]: New session 1 of user core. Nov 12 20:45:07.996094 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:45:08.002773 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:45:08.064208 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:45:08.072733 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:45:08.076056 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:45:08.264627 systemd[1697]: Queued start job for default target default.target. Nov 12 20:45:08.265256 systemd[1697]: Created slice app.slice - User Application Slice. Nov 12 20:45:08.265326 systemd[1697]: Reached target paths.target - Paths. Nov 12 20:45:08.265346 systemd[1697]: Reached target timers.target - Timers. Nov 12 20:45:08.273627 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:45:08.284575 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:45:08.284810 systemd[1697]: Reached target sockets.target - Sockets. 
Nov 12 20:45:08.284885 systemd[1697]: Reached target basic.target - Basic System. Nov 12 20:45:08.284988 systemd[1697]: Reached target default.target - Main User Target. Nov 12 20:45:08.285073 systemd[1697]: Startup finished in 201ms. Nov 12 20:45:08.285970 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:45:08.296906 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:45:08.327147 kubelet[1679]: E1112 20:45:08.326963 1679 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:08.333283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:08.333713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:08.398964 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:49816.service - OpenSSH per-connection server daemon (10.0.0.1:49816). Nov 12 20:45:08.435076 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 49816 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:08.436952 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:08.442351 systemd-logind[1564]: New session 2 of user core. Nov 12 20:45:08.452055 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:45:08.509891 sshd[1712]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:08.531944 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:49828.service - OpenSSH per-connection server daemon (10.0.0.1:49828). Nov 12 20:45:08.532574 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:49816.service: Deactivated successfully. Nov 12 20:45:08.535922 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:45:08.537186 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:45:08.538125 systemd-logind[1564]: Removed session 2. Nov 12 20:45:08.567184 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 49828 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:08.569407 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:08.574399 systemd-logind[1564]: New session 3 of user core. Nov 12 20:45:08.588018 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:45:08.642767 sshd[1717]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:08.650875 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:49836.service - OpenSSH per-connection server daemon (10.0.0.1:49836). Nov 12 20:45:08.651524 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:49828.service: Deactivated successfully. Nov 12 20:45:08.654320 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:45:08.655678 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:45:08.657055 systemd-logind[1564]: Removed session 3. Nov 12 20:45:08.688690 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 49836 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:08.690789 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:08.695516 systemd-logind[1564]: New session 4 of user core. Nov 12 20:45:08.705804 systemd[1]: Started session-4.scope - Session 4 of User core. 
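The kubelet exit above (and every retry that follows in this log) is the same startup race: the unit starts before anything has written /var/lib/kubelet/config.yaml, so it exits with status 1 and systemd reschedules it. The file it wants is a KubeletConfiguration object; a minimal sketch of its shape, noting that in a kubeadm-based setup the real file is generated during init/join (the `cgroupfs` driver matches the container manager config later in this log):

```shell
cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
EOF
```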
Nov 12 20:45:08.763697 sshd[1725]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:08.776837 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848). Nov 12 20:45:08.777509 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:49836.service: Deactivated successfully. Nov 12 20:45:08.781194 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:45:08.782341 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:45:08.783689 systemd-logind[1564]: Removed session 4. Nov 12 20:45:08.814072 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:08.815982 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:08.821009 systemd-logind[1564]: New session 5 of user core. Nov 12 20:45:08.830943 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:45:08.892294 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:45:08.892736 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:08.909096 sudo[1740]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:08.911262 sshd[1733]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:08.919795 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:49862.service - OpenSSH per-connection server daemon (10.0.0.1:49862). Nov 12 20:45:08.920523 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:49848.service: Deactivated successfully. Nov 12 20:45:08.923680 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:45:08.924763 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:45:08.925802 systemd-logind[1564]: Removed session 5. Nov 12 20:45:08.953629 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 49862 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:08.955400 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:08.959839 systemd-logind[1564]: New session 6 of user core. Nov 12 20:45:08.970811 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:45:09.027849 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:45:09.028230 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:09.032349 sudo[1750]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:09.039353 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:45:09.039758 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:09.060843 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:45:09.063135 auditctl[1753]: No rules Nov 12 20:45:09.065125 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:45:09.065604 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:45:09.068475 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:45:09.108285 augenrules[1772]: No rules Nov 12 20:45:09.110644 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
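The audit-rules restart above boils down to three steps: the sudo session deletes two rule files, the service flushes the live ruleset, then reloads what remains. The same sequence by hand, assuming the standard auditctl/augenrules tools:

```shell
auditctl -D        # flush the active ruleset ("No rules" in the log)
augenrules --load  # recompile whatever is left under /etc/audit/rules.d/
auditctl -l        # list the result; empty here since both rule files were removed
```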
Nov 12 20:45:09.112155 sudo[1749]: pam_unix(sudo:session): session closed for user root Nov 12 20:45:09.114393 sshd[1742]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:09.128875 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:49866.service - OpenSSH per-connection server daemon (10.0.0.1:49866). Nov 12 20:45:09.129587 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:49862.service: Deactivated successfully. Nov 12 20:45:09.133309 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:45:09.134889 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:45:09.136305 systemd-logind[1564]: Removed session 6. Nov 12 20:45:09.166275 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 49866 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:09.168270 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:09.173021 systemd-logind[1564]: New session 7 of user core. Nov 12 20:45:09.186773 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:45:09.241427 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:45:09.241823 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:45:10.184725 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:45:10.185053 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:45:10.579674 dockerd[1804]: time="2024-11-12T20:45:10.579515094Z" level=info msg="Starting up" Nov 12 20:45:12.137395 dockerd[1804]: time="2024-11-12T20:45:12.137308669Z" level=info msg="Loading containers: start." Nov 12 20:45:12.278615 kernel: Initializing XFRM netlink socket Nov 12 20:45:12.392396 systemd-networkd[1250]: docker0: Link UP Nov 12 20:45:12.417810 dockerd[1804]: time="2024-11-12T20:45:12.417755788Z" level=info msg="Loading containers: done." Nov 12 20:45:12.776194 dockerd[1804]: time="2024-11-12T20:45:12.775955181Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:45:12.776194 dockerd[1804]: time="2024-11-12T20:45:12.776108809Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:45:12.776529 dockerd[1804]: time="2024-11-12T20:45:12.776291352Z" level=info msg="Daemon has completed initialization" Nov 12 20:45:12.883566 dockerd[1804]: time="2024-11-12T20:45:12.883379203Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:45:12.883882 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:45:14.000921 containerd[1586]: time="2024-11-12T20:45:14.000855787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:45:16.442938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266210195.mount: Deactivated successfully. Nov 12 20:45:18.583729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:45:18.597676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:18.786248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
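dockerd comes up cleanly in the records above; the overlay2 native-diff warning is informational on kernels built with CONFIG_OVERLAY_FS_REDIRECT_DIR and does not indicate a failure. A quick post-start check matching the versions the daemon logged, as a sketch:

```shell
# Expect "26.1.0 overlay2" per the daemon's own startup record.
docker info --format '{{.ServerVersion}} {{.Driver}}'
```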
Nov 12 20:45:18.791732 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:18.937129 kubelet[1998]: E1112 20:45:18.936853 1998 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:18.944754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:18.945055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:22.178616 containerd[1586]: time="2024-11-12T20:45:22.178528154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:22.212218 containerd[1586]: time="2024-11-12T20:45:22.212086405Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:45:22.296996 containerd[1586]: time="2024-11-12T20:45:22.296931802Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:22.348734 containerd[1586]: time="2024-11-12T20:45:22.348646369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:22.350230 containerd[1586]: time="2024-11-12T20:45:22.350157525Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 8.349236045s" Nov 12 20:45:22.350230 containerd[1586]: time="2024-11-12T20:45:22.350234009Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:45:22.377888 containerd[1586]: time="2024-11-12T20:45:22.377842253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:45:25.005944 containerd[1586]: time="2024-11-12T20:45:25.005821116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:25.011744 containerd[1586]: time="2024-11-12T20:45:25.011604420Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:45:25.016562 containerd[1586]: time="2024-11-12T20:45:25.016356268Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:25.023998 containerd[1586]: time="2024-11-12T20:45:25.023858528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 
20:45:25.026254 containerd[1586]: time="2024-11-12T20:45:25.026173503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.648280554s" Nov 12 20:45:25.026254 containerd[1586]: time="2024-11-12T20:45:25.026243184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:45:25.127319 containerd[1586]: time="2024-11-12T20:45:25.127215036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:45:28.046303 containerd[1586]: time="2024-11-12T20:45:28.046196368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:28.080109 containerd[1586]: time="2024-11-12T20:45:28.079962650Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:45:28.107720 containerd[1586]: time="2024-11-12T20:45:28.107561578Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:28.150612 containerd[1586]: time="2024-11-12T20:45:28.150503340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:28.152272 containerd[1586]: time="2024-11-12T20:45:28.152209412Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 3.024934954s" Nov 12 20:45:28.152272 containerd[1586]: time="2024-11-12T20:45:28.152249778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:45:28.183579 containerd[1586]: time="2024-11-12T20:45:28.183506670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:45:28.952595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:45:28.971717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:29.571771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
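The control-plane images pulled in this stretch land in containerd's CRI image store, not in Docker's. They can be listed over the CRI socket; a sketch assuming `crictl` is installed (it does not appear in this log):

```shell
# Image IDs here should match the sha256:... references in the pull records,
# e.g. registry.k8s.io/kube-apiserver v1.29.10.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```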
Nov 12 20:45:29.577507 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:29.680223 kubelet[2076]: E1112 20:45:29.680127 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:29.685895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:29.686240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:32.103037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754235410.mount: Deactivated successfully. Nov 12 20:45:33.220299 containerd[1586]: time="2024-11-12T20:45:33.220232244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:33.268093 containerd[1586]: time="2024-11-12T20:45:33.267941223Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:45:33.345127 containerd[1586]: time="2024-11-12T20:45:33.345050250Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:33.428378 containerd[1586]: time="2024-11-12T20:45:33.428269926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:33.429245 containerd[1586]: time="2024-11-12T20:45:33.429181186Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 5.245628349s" Nov 12 20:45:33.429245 containerd[1586]: time="2024-11-12T20:45:33.429233394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:45:33.459759 containerd[1586]: time="2024-11-12T20:45:33.459712046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:45:39.695892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:45:39.705761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:39.709820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659366696.mount: Deactivated successfully. Nov 12 20:45:39.857616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:45:39.877041 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:39.955344 kubelet[2116]: E1112 20:45:39.955274 2116 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:39.960526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:39.960919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:45:45.248786 containerd[1586]: time="2024-11-12T20:45:45.248704788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:45.327350 containerd[1586]: time="2024-11-12T20:45:45.327235328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:45:45.398533 containerd[1586]: time="2024-11-12T20:45:45.398432590Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:45.444318 containerd[1586]: time="2024-11-12T20:45:45.444207181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:45.445858 containerd[1586]: time="2024-11-12T20:45:45.445795259Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 11.986033749s" Nov 12 20:45:45.445939 containerd[1586]: time="2024-11-12T20:45:45.445861335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:45:45.472110 containerd[1586]: time="2024-11-12T20:45:45.472055407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:45:47.758841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999045391.mount: Deactivated successfully. 
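By this point systemd has restarted the kubelet several times on the same missing-config error; the counter in the `Scheduled restart job, restart counter is at N` records is tracked per unit. A sketch for inspecting the loop with standard systemctl properties:

```shell
# Restart policy, backoff interval, cumulative restart count, last result.
systemctl show kubelet -p Restart -p RestartUSec -p NRestarts -p Result
```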
Nov 12 20:45:47.776341 containerd[1586]: time="2024-11-12T20:45:47.776275830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.778797 containerd[1586]: time="2024-11-12T20:45:47.778741620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:45:47.781756 containerd[1586]: time="2024-11-12T20:45:47.781713564Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.784508 containerd[1586]: time="2024-11-12T20:45:47.784464588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:47.785315 containerd[1586]: time="2024-11-12T20:45:47.785272474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.313165279s" Nov 12 20:45:47.785315 containerd[1586]: time="2024-11-12T20:45:47.785306469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:45:47.807152 containerd[1586]: time="2024-11-12T20:45:47.807095861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:45:48.433345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098023795.mount: Deactivated successfully. Nov 12 20:45:50.202253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 20:45:50.210799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:51.012219 update_engine[1565]: I20241112 20:45:51.012049 1565 update_attempter.cc:509] Updating boot flags... Nov 12 20:45:51.456518 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2240) Nov 12 20:45:51.461868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:51.468162 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:45:51.512050 kubelet[2251]: E1112 20:45:51.511892 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:45:51.517032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:45:51.517334 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
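The `Updating boot flags...` record above is Flatcar's update_engine marking the current partition set as good in the A/B update scheme. Its state can be polled from a shell; a sketch assuming the stock client shipped with the OS:

```shell
# CURRENT_OP=UPDATE_STATUS_IDLE would match the locksmithd record
# earlier in this log.
update_engine_client -status
```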
Nov 12 20:45:52.636032 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2244) Nov 12 20:45:52.667509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2244) Nov 12 20:45:53.454226 containerd[1586]: time="2024-11-12T20:45:53.454123524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.457349 containerd[1586]: time="2024-11-12T20:45:53.457261167Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:45:53.460121 containerd[1586]: time="2024-11-12T20:45:53.459997881Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.466250 containerd[1586]: time="2024-11-12T20:45:53.466182776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.467910 containerd[1586]: time="2024-11-12T20:45:53.467843061Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.660685655s" Nov 12 20:45:53.467910 containerd[1586]: time="2024-11-12T20:45:53.467904447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:45:56.211320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:56.222734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:56.244177 systemd[1]: Reloading requested from client PID 2346 ('systemctl') (unit session-7.scope)... Nov 12 20:45:56.244201 systemd[1]: Reloading... Nov 12 20:45:56.326496 zram_generator::config[2388]: No configuration found. Nov 12 20:45:56.702259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:45:56.779178 systemd[1]: Reloading finished in 534 ms. Nov 12 20:45:56.836304 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:45:56.836472 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:45:56.837003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:56.839744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:45:56.995231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:45:57.006976 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:45:57.056310 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
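The daemon-reload pass above flags docker.socket for referencing the legacy /var/run path; systemd rewrites it on the fly, but a permanent fix is a drop-in that resets the listener. A sketch using standard drop-in semantics (the empty `ListenStream=` clears the inherited value before the replacement is set):

```shell
mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-runpath.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload
```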
Nov 12 20:45:57.056310 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:45:57.056310 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:45:57.056943 kubelet[2445]: I1112 20:45:57.056353 2445 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:45:57.343585 kubelet[2445]: I1112 20:45:57.343467 2445 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:45:57.343585 kubelet[2445]: I1112 20:45:57.343499 2445 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:45:57.343727 kubelet[2445]: I1112 20:45:57.343715 2445 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:45:57.383254 kubelet[2445]: E1112 20:45:57.383196 2445 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.390813 kubelet[2445]: I1112 20:45:57.390755 2445 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:45:57.423309 kubelet[2445]: I1112 20:45:57.423244 2445 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:45:57.423828 kubelet[2445]: I1112 20:45:57.423799 2445 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:45:57.424041 kubelet[2445]: I1112 20:45:57.424013 2445 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:45:57.432834 kubelet[2445]: I1112 20:45:57.432740 2445 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:45:57.432834 kubelet[2445]: I1112 20:45:57.432809 2445 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:45:57.433065 kubelet[2445]: I1112 20:45:57.433031 2445 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:45:57.433256 kubelet[2445]: I1112 20:45:57.433212 2445 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:45:57.433256 kubelet[2445]: I1112 20:45:57.433249 2445 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:45:57.433346 kubelet[2445]: I1112 20:45:57.433308 2445 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:45:57.433381 kubelet[2445]: I1112 20:45:57.433348 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:45:57.434089 kubelet[2445]: W1112 20:45:57.434016 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.434172 kubelet[2445]: E1112 20:45:57.434109 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.434172 kubelet[2445]: W1112 20:45:57.434107 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 
20:45:57.434172 kubelet[2445]: E1112 20:45:57.434166 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.443650 kubelet[2445]: I1112 20:45:57.443614 2445 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:45:57.449941 kubelet[2445]: I1112 20:45:57.449890 2445 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:45:57.450048 kubelet[2445]: W1112 20:45:57.449992 2445 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:45:57.450815 kubelet[2445]: I1112 20:45:57.450794 2445 server.go:1256] "Started kubelet" Nov 12 20:45:57.452218 kubelet[2445]: I1112 20:45:57.450948 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:45:57.452218 kubelet[2445]: I1112 20:45:57.451310 2445 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:45:57.452218 kubelet[2445]: I1112 20:45:57.451382 2445 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:45:57.452218 kubelet[2445]: I1112 20:45:57.452040 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:45:57.452580 kubelet[2445]: I1112 20:45:57.452550 2445 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:45:57.453889 kubelet[2445]: E1112 20:45:57.453858 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:45:57.453946 kubelet[2445]: I1112 20:45:57.453914 2445 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:45:57.454245 kubelet[2445]: I1112 20:45:57.454050 2445 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:45:57.454245 kubelet[2445]: I1112 20:45:57.454184 2445 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:45:57.454698 kubelet[2445]: W1112 20:45:57.454639 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.454698 kubelet[2445]: E1112 20:45:57.454696 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.456792 kubelet[2445]: E1112 20:45:57.456761 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms" Nov 12 20:45:57.457322 kubelet[2445]: E1112 20:45:57.457282 2445 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:45:57.457497 kubelet[2445]: I1112 20:45:57.457474 2445 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:45:57.457605 kubelet[2445]: I1112 20:45:57.457581 2445 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:45:57.458593 kubelet[2445]: I1112 20:45:57.458574 2445 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:45:57.471615 kubelet[2445]: I1112 20:45:57.471569 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:45:57.473077 kubelet[2445]: I1112 20:45:57.473039 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:45:57.473077 kubelet[2445]: I1112 20:45:57.473082 2445 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:45:57.473170 kubelet[2445]: I1112 20:45:57.473109 2445 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:45:57.473208 kubelet[2445]: E1112 20:45:57.473180 2445 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:45:57.506686 kubelet[2445]: W1112 20:45:57.506621 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.506686 kubelet[2445]: E1112 20:45:57.506689 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:57.511418 kubelet[2445]: E1112 20:45:57.511378 2445 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075378323516a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:45:57.450765989 +0000 UTC m=+0.439121217,LastTimestamp:2024-11-12 20:45:57.450765989 +0000 UTC m=+0.439121217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:45:57.522480 kubelet[2445]: I1112 20:45:57.522408 2445 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:45:57.522480 kubelet[2445]: I1112 20:45:57.522433 2445 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:45:57.522480 kubelet[2445]: I1112 20:45:57.522488 2445 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:45:57.556010 kubelet[2445]: I1112 20:45:57.555962 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:45:57.556358 kubelet[2445]: E1112 20:45:57.556323 2445 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:45:57.573811 kubelet[2445]: E1112 20:45:57.573712 2445 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:45:57.658154 kubelet[2445]: E1112 20:45:57.657969 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Nov 12 20:45:57.758604 kubelet[2445]: I1112 20:45:57.758553 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:45:57.759047 kubelet[2445]: E1112 20:45:57.759009 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:45:57.774093 kubelet[2445]: E1112 20:45:57.774049 2445 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:45:58.059660 kubelet[2445]: E1112 20:45:58.059596 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms" Nov 12 20:45:58.160661 kubelet[2445]: I1112 20:45:58.160613 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:45:58.161053 kubelet[2445]: E1112 20:45:58.161018 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:45:58.175186 kubelet[2445]: E1112 20:45:58.175146 2445 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:45:58.547181 kubelet[2445]: I1112 20:45:58.546791 2445 policy_none.go:49] "None policy: Start" Nov 12 20:45:58.548489 kubelet[2445]: I1112 20:45:58.548399 2445 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:45:58.548489 kubelet[2445]: I1112 20:45:58.548442 2445 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:45:58.561779 kubelet[2445]: W1112 20:45:58.561675 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.561779 kubelet[2445]: E1112 20:45:58.561779 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.564285 kubelet[2445]: W1112 20:45:58.564234 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.564369 kubelet[2445]: E1112 20:45:58.564297 2445 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.819290 kubelet[2445]: W1112 20:45:58.819107 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.819290 kubelet[2445]: E1112 20:45:58.819190 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:58.861158 kubelet[2445]: E1112 20:45:58.861090 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="1.6s" Nov 12 20:45:58.963003 kubelet[2445]: I1112 20:45:58.962956 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:45:58.963425 kubelet[2445]: E1112 20:45:58.963394 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:45:58.975550 kubelet[2445]: E1112 20:45:58.975509 2445 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:45:58.982153 kubelet[2445]: E1112 20:45:58.982120 2445 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075378323516a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:45:57.450765989 +0000 UTC m=+0.439121217,LastTimestamp:2024-11-12 20:45:57.450765989 +0000 UTC m=+0.439121217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:45:59.015781 kubelet[2445]: W1112 20:45:59.015679 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:59.015781 kubelet[2445]: E1112 20:45:59.015775 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:59.455337 kubelet[2445]: E1112 20:45:59.455268 2445 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:45:59.779520 kubelet[2445]: I1112 20:45:59.779356 2445 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:45:59.779734 kubelet[2445]: I1112 20:45:59.779709 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:45:59.781735 kubelet[2445]: E1112 20:45:59.781718 2445 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:46:00.462825 kubelet[2445]: E1112 20:46:00.462754 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="3.2s" Nov 12 20:46:00.559783 kubelet[2445]: W1112 20:46:00.559699 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:00.559783 kubelet[2445]: E1112 20:46:00.559762 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:00.567743 kubelet[2445]: I1112 20:46:00.567694 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:46:00.568248 kubelet[2445]: E1112 20:46:00.568207 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:46:00.576597 kubelet[2445]: I1112 20:46:00.576550 2445 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:46:00.580822 kubelet[2445]: I1112 20:46:00.580796 2445 topology_manager.go:215] "Topology Admit Handler" podUID="cabe70d483ba21883d41c59d9ffe34e8" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:46:00.582033 kubelet[2445]: I1112 20:46:00.581975 2445 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:46:00.671239 kubelet[2445]: I1112 20:46:00.671158 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:00.671239 kubelet[2445]: I1112 20:46:00.671225 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:46:00.671239 kubelet[2445]: I1112 20:46:00.671253 2445 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:00.671501 kubelet[2445]: I1112 20:46:00.671280 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:00.671501 kubelet[2445]: I1112 20:46:00.671307 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:00.671501 kubelet[2445]: I1112 20:46:00.671392 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:00.671501 kubelet[2445]: I1112 20:46:00.671476 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:00.671501 kubelet[2445]: I1112 20:46:00.671496 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:00.671636 kubelet[2445]: I1112 20:46:00.671529 2445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:00.679662 kubelet[2445]: W1112 20:46:00.679637 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:00.679730 kubelet[2445]: E1112 20:46:00.679669 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:00.886971 kubelet[2445]: E1112 20:46:00.886767 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:00.887336 kubelet[2445]: E1112 20:46:00.887181 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:00.887751 containerd[1586]: time="2024-11-12T20:46:00.887700235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cabe70d483ba21883d41c59d9ffe34e8,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:00.888221 containerd[1586]: time="2024-11-12T20:46:00.887723478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:00.890138 kubelet[2445]: E1112 20:46:00.890102 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:00.890660 containerd[1586]: time="2024-11-12T20:46:00.890615709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:01.127659 kubelet[2445]: W1112 20:46:01.127567 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:01.127659 kubelet[2445]: E1112 20:46:01.127644 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:01.883931 kubelet[2445]: W1112 20:46:01.883879 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:01.883931 kubelet[2445]: E1112 20:46:01.883930 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:03.208043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046517788.mount: Deactivated successfully. 
Nov 12 20:46:03.452218 containerd[1586]: time="2024-11-12T20:46:03.452140408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:46:03.526767 containerd[1586]: time="2024-11-12T20:46:03.526584782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:46:03.569337 containerd[1586]: time="2024-11-12T20:46:03.569233123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:46:03.623279 containerd[1586]: time="2024-11-12T20:46:03.623199567Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:46:03.656124 containerd[1586]: time="2024-11-12T20:46:03.656008590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:46:03.664214 kubelet[2445]: E1112 20:46:03.664171 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="6.4s" Nov 12 20:46:03.696672 containerd[1586]: time="2024-11-12T20:46:03.696562121Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:46:03.723246 containerd[1586]: time="2024-11-12T20:46:03.723094428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:46:03.770427 kubelet[2445]: I1112 20:46:03.770377 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:46:03.770978 kubelet[2445]: E1112 20:46:03.770930 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 12 20:46:03.781611 containerd[1586]: time="2024-11-12T20:46:03.781478755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:46:03.782653 containerd[1586]: time="2024-11-12T20:46:03.782599097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.891906063s" Nov 12 20:46:03.783393 containerd[1586]: time="2024-11-12T20:46:03.783362086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.895543148s" Nov 12 20:46:03.830043 kubelet[2445]: 
E1112 20:46:03.829979 2445 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:03.938587 containerd[1586]: time="2024-11-12T20:46:03.938510980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.050516029s" Nov 12 20:46:04.373018 kubelet[2445]: W1112 20:46:04.372959 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:04.373018 kubelet[2445]: E1112 20:46:04.373014 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:04.390780 kubelet[2445]: W1112 20:46:04.390743 2445 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:04.390780 kubelet[2445]: E1112 20:46:04.390774 2445 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Nov 12 20:46:04.946948 containerd[1586]: time="2024-11-12T20:46:04.946720754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:04.946948 containerd[1586]: time="2024-11-12T20:46:04.946777160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:04.946948 containerd[1586]: time="2024-11-12T20:46:04.946791928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:04.946948 containerd[1586]: time="2024-11-12T20:46:04.946887498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:04.954041 containerd[1586]: time="2024-11-12T20:46:04.953211761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:04.954041 containerd[1586]: time="2024-11-12T20:46:04.953335815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:04.954041 containerd[1586]: time="2024-11-12T20:46:04.953351095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:04.954041 containerd[1586]: time="2024-11-12T20:46:04.953476882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:05.033772 containerd[1586]: time="2024-11-12T20:46:05.033347776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:05.033772 containerd[1586]: time="2024-11-12T20:46:05.033408792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:05.034114 containerd[1586]: time="2024-11-12T20:46:05.033435682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:05.035038 containerd[1586]: time="2024-11-12T20:46:05.034948053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:05.037875 containerd[1586]: time="2024-11-12T20:46:05.037788597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"38213d16c4b83c1856f7f04135d95c50b144dc0e13692322aebae15e48cc7861\"" Nov 12 20:46:05.044677 kubelet[2445]: E1112 20:46:05.044633 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:05.047198 containerd[1586]: time="2024-11-12T20:46:05.046879952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cabe70d483ba21883d41c59d9ffe34e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e19cadc7e5debffecf008a9c29dda863619da9e7c0de8666aa2041c3db8a4dc\"" Nov 12 20:46:05.047494 kubelet[2445]: E1112 20:46:05.047477 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:05.049292 containerd[1586]: time="2024-11-12T20:46:05.049266209Z" level=info msg="CreateContainer within sandbox \"38213d16c4b83c1856f7f04135d95c50b144dc0e13692322aebae15e48cc7861\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:46:05.049652 containerd[1586]: time="2024-11-12T20:46:05.049611941Z" level=info msg="CreateContainer within sandbox \"3e19cadc7e5debffecf008a9c29dda863619da9e7c0de8666aa2041c3db8a4dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:46:05.109531 containerd[1586]: time="2024-11-12T20:46:05.109475091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4925679b0be4e66305e53718032a1354f95aed3ec931f4cf96a38d9c6c208013\"" Nov 12 20:46:05.110434 kubelet[2445]: E1112 20:46:05.110401 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:05.112205 containerd[1586]: time="2024-11-12T20:46:05.112175910Z" level=info msg="CreateContainer within sandbox \"4925679b0be4e66305e53718032a1354f95aed3ec931f4cf96a38d9c6c208013\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:46:05.184787 containerd[1586]: time="2024-11-12T20:46:05.184717687Z" level=info msg="CreateContainer within sandbox \"3e19cadc7e5debffecf008a9c29dda863619da9e7c0de8666aa2041c3db8a4dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"089803c9ce1d67320624a972a9cafae210e36b781900c9c01bfdd3a5a745c7f0\"" Nov 12 20:46:05.185613 containerd[1586]: time="2024-11-12T20:46:05.185556757Z" level=info msg="StartContainer for \"089803c9ce1d67320624a972a9cafae210e36b781900c9c01bfdd3a5a745c7f0\"" Nov 12 20:46:05.193542 containerd[1586]: time="2024-11-12T20:46:05.193409739Z" level=info msg="CreateContainer within sandbox \"38213d16c4b83c1856f7f04135d95c50b144dc0e13692322aebae15e48cc7861\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab3426f45e6c3269a06620e64add3192d2a3f7408741a45f2dc2e35c94c6fe88\"" Nov 12 20:46:05.194188 containerd[1586]: time="2024-11-12T20:46:05.194144414Z" level=info msg="StartContainer for \"ab3426f45e6c3269a06620e64add3192d2a3f7408741a45f2dc2e35c94c6fe88\"" Nov 12 20:46:05.200290 containerd[1586]: time="2024-11-12T20:46:05.199364112Z" level=info msg="CreateContainer within sandbox \"4925679b0be4e66305e53718032a1354f95aed3ec931f4cf96a38d9c6c208013\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f076e9482298dee340bcd1ecc601c3791502029383590b93409ac7e47af862f\"" Nov 12 20:46:05.200290 containerd[1586]: time="2024-11-12T20:46:05.200224984Z" level=info msg="StartContainer for \"0f076e9482298dee340bcd1ecc601c3791502029383590b93409ac7e47af862f\"" Nov 12 20:46:05.305321 containerd[1586]: time="2024-11-12T20:46:05.305114870Z" level=info msg="StartContainer for \"ab3426f45e6c3269a06620e64add3192d2a3f7408741a45f2dc2e35c94c6fe88\" returns successfully" Nov 12 20:46:05.314837 containerd[1586]: time="2024-11-12T20:46:05.314777341Z" level=info msg="StartContainer for \"0f076e9482298dee340bcd1ecc601c3791502029383590b93409ac7e47af862f\" returns successfully" Nov 12 20:46:05.315290 containerd[1586]: time="2024-11-12T20:46:05.315254430Z" level=info msg="StartContainer for \"089803c9ce1d67320624a972a9cafae210e36b781900c9c01bfdd3a5a745c7f0\" returns successfully" Nov 12 20:46:05.529673 kubelet[2445]: E1112 20:46:05.529635 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:05.530420 kubelet[2445]: E1112 20:46:05.530395 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:05.535224 kubelet[2445]: E1112 20:46:05.535202 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:06.541546 kubelet[2445]: E1112 20:46:06.541500 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:06.765181 kubelet[2445]: E1112 20:46:06.765138 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:07.123387 kubelet[2445]: E1112 20:46:07.123326 2445 csi_plugin.go:300] Failed to initialize CSINode: 
error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:46:07.482638 kubelet[2445]: E1112 20:46:07.482588 2445 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:46:07.962721 kubelet[2445]: E1112 20:46:07.962644 2445 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:46:08.859046 kubelet[2445]: E1112 20:46:08.858994 2445 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:46:09.781938 kubelet[2445]: E1112 20:46:09.781862 2445 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:46:10.114046 kubelet[2445]: E1112 20:46:10.113886 2445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:46:10.173068 kubelet[2445]: I1112 20:46:10.173031 2445 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:46:10.213651 kubelet[2445]: I1112 20:46:10.213599 2445 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:46:10.220551 kubelet[2445]: E1112 20:46:10.220512 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.321026 kubelet[2445]: E1112 20:46:10.320958 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.421822 kubelet[2445]: E1112 20:46:10.421637 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.522632 kubelet[2445]: E1112 20:46:10.522537 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.623468 kubelet[2445]: E1112 20:46:10.623404 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.627606 kubelet[2445]: E1112 20:46:10.627587 2445 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:10.724669 kubelet[2445]: E1112 20:46:10.724600 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.825477 kubelet[2445]: E1112 20:46:10.825385 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:10.926481 kubelet[2445]: E1112 20:46:10.926401 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.027160 kubelet[2445]: E1112 20:46:11.026974 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.127899 kubelet[2445]: E1112 20:46:11.127818 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.228529 kubelet[2445]: E1112 20:46:11.228433 2445 kubelet_node_status.go:462] "Error getting the current 
node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.329358 kubelet[2445]: E1112 20:46:11.329144 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.430041 kubelet[2445]: E1112 20:46:11.429975 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.531085 kubelet[2445]: E1112 20:46:11.530996 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.558523 systemd[1]: Reloading requested from client PID 2721 ('systemctl') (unit session-7.scope)... Nov 12 20:46:11.558544 systemd[1]: Reloading... Nov 12 20:46:11.632099 kubelet[2445]: E1112 20:46:11.631924 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.656949 zram_generator::config[2763]: No configuration found. Nov 12 20:46:11.732907 kubelet[2445]: E1112 20:46:11.732804 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.798083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:46:11.833827 kubelet[2445]: E1112 20:46:11.833757 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.898774 systemd[1]: Reloading finished in 339 ms. Nov 12 20:46:11.934576 kubelet[2445]: E1112 20:46:11.934521 2445 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:46:11.943235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:46:11.965942 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:46:11.966639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:46:11.977924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:46:12.178020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:46:12.185835 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:46:12.250102 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:46:12.250102 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:46:12.250102 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:46:12.250725 kubelet[2815]: I1112 20:46:12.250148 2815 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:46:12.255461 kubelet[2815]: I1112 20:46:12.255398 2815 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:46:12.255461 kubelet[2815]: I1112 20:46:12.255433 2815 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:46:12.255774 kubelet[2815]: I1112 20:46:12.255750 2815 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:46:12.257339 kubelet[2815]: I1112 20:46:12.257309 2815 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:46:12.266783 kubelet[2815]: I1112 20:46:12.266738 2815 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:46:12.276168 kubelet[2815]: I1112 20:46:12.276128 2815 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:46:12.276806 kubelet[2815]: I1112 20:46:12.276781 2815 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:46:12.276995 kubelet[2815]: I1112 20:46:12.276968 2815 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:46:12.277070 kubelet[2815]: I1112 20:46:12.277004 2815 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:46:12.277070 kubelet[2815]: I1112 20:46:12.277014 2815 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:46:12.277070 kubelet[2815]: I1112 20:46:12.277056 2815 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:46:12.277185 kubelet[2815]: I1112 20:46:12.277167 2815 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:46:12.277185 kubelet[2815]: I1112 20:46:12.277186 2815 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:46:12.277226 kubelet[2815]: I1112 20:46:12.277214 2815 kubelet.go:312] "Adding apiserver pod source" Nov 12 
20:46:12.277246 kubelet[2815]: I1112 20:46:12.277229 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:46:12.280617 kubelet[2815]: I1112 20:46:12.280588 2815 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:46:12.280828 kubelet[2815]: I1112 20:46:12.280807 2815 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:46:12.281232 kubelet[2815]: I1112 20:46:12.281211 2815 server.go:1256] "Started kubelet" Nov 12 20:46:12.283933 kubelet[2815]: I1112 20:46:12.282407 2815 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:46:12.283933 kubelet[2815]: I1112 20:46:12.283844 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:46:12.284500 kubelet[2815]: I1112 20:46:12.284231 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:46:12.284500 kubelet[2815]: I1112 20:46:12.284480 2815 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:46:12.284782 kubelet[2815]: I1112 20:46:12.284762 2815 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:46:12.288723 kubelet[2815]: I1112 20:46:12.288314 2815 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:46:12.288723 kubelet[2815]: I1112 20:46:12.288429 2815 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:46:12.289246 kubelet[2815]: I1112 20:46:12.289222 2815 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:46:12.290353 kubelet[2815]: I1112 20:46:12.290278 2815 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:46:12.290476 kubelet[2815]: I1112 20:46:12.290432 2815 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:46:12.291691 kubelet[2815]: E1112 20:46:12.291660 2815 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:46:12.294772 kubelet[2815]: I1112 20:46:12.294562 2815 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:46:12.297667 kubelet[2815]: I1112 20:46:12.297642 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:46:12.299091 kubelet[2815]: I1112 20:46:12.299060 2815 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:46:12.299137 kubelet[2815]: I1112 20:46:12.299101 2815 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:46:12.299137 kubelet[2815]: I1112 20:46:12.299129 2815 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:46:12.299207 kubelet[2815]: E1112 20:46:12.299190 2815 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:46:12.345842 kubelet[2815]: I1112 20:46:12.345803 2815 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:46:12.345842 kubelet[2815]: I1112 20:46:12.345826 2815 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:46:12.345842 kubelet[2815]: I1112 20:46:12.345844 2815 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:46:12.346057 kubelet[2815]: I1112 20:46:12.345995 2815 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:46:12.346057 kubelet[2815]: I1112 20:46:12.346016 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:46:12.346057 kubelet[2815]: I1112 20:46:12.346023 2815 policy_none.go:49] "None policy: Start" Nov 12 20:46:12.346951 kubelet[2815]: I1112 20:46:12.346907 2815 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:46:12.346993 kubelet[2815]: I1112 20:46:12.346962 2815 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:46:12.347381 kubelet[2815]: I1112 20:46:12.347352 2815 state_mem.go:75] "Updated machine memory state" Nov 12 20:46:12.350155 kubelet[2815]: I1112 20:46:12.349507 2815 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:46:12.350155 kubelet[2815]: I1112 20:46:12.350065 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:46:12.400238 kubelet[2815]: I1112 20:46:12.400165 2815 topology_manager.go:215] "Topology Admit Handler" podUID="cabe70d483ba21883d41c59d9ffe34e8" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:46:12.400396 kubelet[2815]: I1112 20:46:12.400347 2815 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:46:12.400493 kubelet[2815]: I1112 20:46:12.400401 2815 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:46:12.456635 kubelet[2815]: I1112 20:46:12.456605 2815 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:46:12.538904 kubelet[2815]: I1112 20:46:12.538844 2815 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:46:12.539087 kubelet[2815]: I1112 20:46:12.538992 2815 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:46:12.592238 kubelet[2815]: I1112 20:46:12.592152 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:12.592238 kubelet[2815]: I1112 20:46:12.592230 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:12.592543 kubelet[2815]: I1112 20:46:12.592371 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:12.592543 kubelet[2815]: I1112 20:46:12.592471 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:12.592543 kubelet[2815]: I1112 20:46:12.592508 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:46:12.592638 kubelet[2815]: I1112 20:46:12.592555 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:12.592638 kubelet[2815]: I1112 20:46:12.592583 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cabe70d483ba21883d41c59d9ffe34e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cabe70d483ba21883d41c59d9ffe34e8\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:12.592638 kubelet[2815]: I1112 20:46:12.592632 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:12.592746 kubelet[2815]: I1112 20:46:12.592693 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:46:12.715478 kubelet[2815]: E1112 20:46:12.715258 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:12.715637 kubelet[2815]: E1112 20:46:12.715578 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:12.716501 kubelet[2815]: E1112 
20:46:12.716354 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:13.278034 kubelet[2815]: I1112 20:46:13.277965 2815 apiserver.go:52] "Watching apiserver" Nov 12 20:46:13.289760 kubelet[2815]: I1112 20:46:13.289663 2815 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:46:13.318483 kubelet[2815]: E1112 20:46:13.316670 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:13.321010 kubelet[2815]: E1112 20:46:13.320070 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:13.327488 kubelet[2815]: E1112 20:46:13.324875 2815 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:46:13.327488 kubelet[2815]: E1112 20:46:13.325535 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:13.355818 kubelet[2815]: I1112 20:46:13.355761 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.355711244 podStartE2EDuration="1.355711244s" podCreationTimestamp="2024-11-12 20:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:13.355522599 +0000 UTC m=+1.163391780" watchObservedRunningTime="2024-11-12 20:46:13.355711244 +0000 UTC m=+1.163580425" Nov 12 20:46:13.368975 kubelet[2815]: I1112 20:46:13.368915 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.368874422 podStartE2EDuration="1.368874422s" podCreationTimestamp="2024-11-12 20:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:13.368707548 +0000 UTC m=+1.176576719" watchObservedRunningTime="2024-11-12 20:46:13.368874422 +0000 UTC m=+1.176743603" Nov 12 20:46:13.380481 kubelet[2815]: I1112 20:46:13.380412 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3803545320000001 podStartE2EDuration="1.380354532s" podCreationTimestamp="2024-11-12 20:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:13.378690081 +0000 UTC m=+1.186559272" watchObservedRunningTime="2024-11-12 20:46:13.380354532 +0000 UTC m=+1.188223713" Nov 12 20:46:14.324483 kubelet[2815]: E1112 20:46:14.321331 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:17.336286 sudo[1785]: pam_unix(sudo:session): session closed for user root Nov 12 20:46:17.349969 sshd[1778]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:17.357252 systemd[1]: 
sshd@6-10.0.0.56:22-10.0.0.1:49866.service: Deactivated successfully. Nov 12 20:46:17.361031 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:46:17.361912 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:46:17.363226 systemd-logind[1564]: Removed session 7. Nov 12 20:46:18.964689 kubelet[2815]: E1112 20:46:18.964647 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:19.327977 kubelet[2815]: E1112 20:46:19.327779 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:19.776647 kubelet[2815]: E1112 20:46:19.776591 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:20.328970 kubelet[2815]: E1112 20:46:20.328930 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:22.710660 kubelet[2815]: E1112 20:46:22.710592 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:23.270786 kubelet[2815]: I1112 20:46:23.270742 2815 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:46:23.271079 containerd[1586]: time="2024-11-12T20:46:23.271043590Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
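[Editor's note: the dns.go:153 "Nameserver limits exceeded" error that recurs throughout this log is the kubelet capping a resolv.conf at three nameservers -- the glibc resolver only consults the first three entries -- so with four or more nameservers configured on the host it applies the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and warns that the rest were omitted. The Go sketch below illustrates that cap only; the function name, the constant, and the sample resolv.conf are invented for the example, and this is not kubelet's actual code.]

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of three that the
// kubelet enforces when assembling a pod's resolv.conf.
const maxNameservers = 3

// capNameservers keeps the first three nameserver entries and collects
// the rest, which the kubelet would report as omitted.
func capNameservers(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) == 2 && f[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, f[1])
			} else {
				omitted = append(omitted, f[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	host := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, omitted := capNameservers(host)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}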
Nov 12 20:46:23.271504 kubelet[2815]: I1112 20:46:23.271231 2815 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:46:23.334762 kubelet[2815]: E1112 20:46:23.334734 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:24.057414 kubelet[2815]: I1112 20:46:24.057365 2815 topology_manager.go:215] "Topology Admit Handler" podUID="2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c" podNamespace="kube-system" podName="kube-proxy-2nlff" Nov 12 20:46:24.158975 kubelet[2815]: I1112 20:46:24.158889 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c-kube-proxy\") pod \"kube-proxy-2nlff\" (UID: \"2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c\") " pod="kube-system/kube-proxy-2nlff" Nov 12 20:46:24.158975 kubelet[2815]: I1112 20:46:24.158990 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c-lib-modules\") pod \"kube-proxy-2nlff\" (UID: \"2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c\") " pod="kube-system/kube-proxy-2nlff" Nov 12 20:46:24.159224 kubelet[2815]: I1112 20:46:24.159041 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c-xtables-lock\") pod \"kube-proxy-2nlff\" (UID: \"2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c\") " pod="kube-system/kube-proxy-2nlff" Nov 12 20:46:24.159224 kubelet[2815]: I1112 20:46:24.159083 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475sb\" (UniqueName: \"kubernetes.io/projected/2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c-kube-api-access-475sb\") pod \"kube-proxy-2nlff\" (UID: \"2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c\") " pod="kube-system/kube-proxy-2nlff" Nov 12 20:46:24.192359 kubelet[2815]: I1112 20:46:24.191953 2815 topology_manager.go:215] "Topology Admit Handler" podUID="b3783601-016d-4585-9c1e-3477077f8bce" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-ffbjk" Nov 12 20:46:24.260451 kubelet[2815]: I1112 20:46:24.260299 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3783601-016d-4585-9c1e-3477077f8bce-var-lib-calico\") pod \"tigera-operator-56b74f76df-ffbjk\" (UID: \"b3783601-016d-4585-9c1e-3477077f8bce\") " pod="tigera-operator/tigera-operator-56b74f76df-ffbjk" Nov 12 20:46:24.260638 kubelet[2815]: I1112 20:46:24.260559 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29b9\" (UniqueName: \"kubernetes.io/projected/b3783601-016d-4585-9c1e-3477077f8bce-kube-api-access-k29b9\") pod \"tigera-operator-56b74f76df-ffbjk\" (UID: \"b3783601-016d-4585-9c1e-3477077f8bce\") " pod="tigera-operator/tigera-operator-56b74f76df-ffbjk" Nov 12 20:46:24.367918 kubelet[2815]: E1112 20:46:24.367770 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:24.369046 containerd[1586]: time="2024-11-12T20:46:24.368706179Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nlff,Uid:2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:24.498991 containerd[1586]: time="2024-11-12T20:46:24.498889250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-ffbjk,Uid:b3783601-016d-4585-9c1e-3477077f8bce,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:46:24.746366 containerd[1586]: time="2024-11-12T20:46:24.745986098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:24.746366 containerd[1586]: time="2024-11-12T20:46:24.746044858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:24.746366 containerd[1586]: time="2024-11-12T20:46:24.746078822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:24.746366 containerd[1586]: time="2024-11-12T20:46:24.746239354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:24.763126 containerd[1586]: time="2024-11-12T20:46:24.762710405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:24.763126 containerd[1586]: time="2024-11-12T20:46:24.762764808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:24.763126 containerd[1586]: time="2024-11-12T20:46:24.762774997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:24.763126 containerd[1586]: time="2024-11-12T20:46:24.763035426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:24.798762 containerd[1586]: time="2024-11-12T20:46:24.798683233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nlff,Uid:2472d4bd-29d3-4bc6-ad13-4bbdf9fda72c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e424692c97b9346094c97c0a91136550f86722c76eefce413d6290c6bee9c93f\"" Nov 12 20:46:24.799712 kubelet[2815]: E1112 20:46:24.799688 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:24.802568 containerd[1586]: time="2024-11-12T20:46:24.802529060Z" level=info msg="CreateContainer within sandbox \"e424692c97b9346094c97c0a91136550f86722c76eefce413d6290c6bee9c93f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:46:24.822147 containerd[1586]: time="2024-11-12T20:46:24.822107961Z" level=info msg="CreateContainer within sandbox \"e424692c97b9346094c97c0a91136550f86722c76eefce413d6290c6bee9c93f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91d92afab9b9faf6d7efe7dfcaa8f0791b998af5338d30e7bc15bf85e7d61a31\"" Nov 12 20:46:24.828480 containerd[1586]: time="2024-11-12T20:46:24.826758590Z" level=info msg="StartContainer for \"91d92afab9b9faf6d7efe7dfcaa8f0791b998af5338d30e7bc15bf85e7d61a31\"" Nov 12 20:46:24.832700 containerd[1586]: time="2024-11-12T20:46:24.832655782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-ffbjk,Uid:b3783601-016d-4585-9c1e-3477077f8bce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b3497d1f1b2d60c5e25fc473fa73d9866c13f7027dcfd04c53d2802e88453ca\"" Nov 12 20:46:24.834375 containerd[1586]: time="2024-11-12T20:46:24.834337833Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:46:24.898583 containerd[1586]: time="2024-11-12T20:46:24.898536881Z" level=info msg="StartContainer for \"91d92afab9b9faf6d7efe7dfcaa8f0791b998af5338d30e7bc15bf85e7d61a31\" returns successfully" Nov 12 20:46:25.340495 kubelet[2815]: E1112 20:46:25.340419 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:25.419350 kubelet[2815]: I1112 20:46:25.419224 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2nlff" podStartSLOduration=1.4191335999999999 podStartE2EDuration="1.4191336s" podCreationTimestamp="2024-11-12 20:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:25.41879844 +0000 UTC m=+13.226667641" watchObservedRunningTime="2024-11-12 20:46:25.4191336 +0000 UTC m=+13.227002791" Nov 12 20:46:27.042354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161772888.mount: Deactivated successfully. 
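[Editor's note: the pod_startup_latency_tracker.go:102 entries report podStartSLOduration; for pods whose images were already present (firstStartedPulling and lastFinishedPulling are the zero time, as for kube-proxy-2nlff above) the figure is essentially observedRunningTime minus podCreationTimestamp. Below is a minimal sketch using timestamps copied from the log; it approximates the reported 1.4191336s and is not kubelet's internal bookkeeping.]

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2024-11-12 20:46:24 +0000 UTC" style in the log;
	// fractional seconds are optional in Go's reference layout.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-11-12 20:46:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-11-12 20:46:25.41879844 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// No image pull happened, so the SLO duration reduces to the gap
	// between creation and the first observed running state: ~1.419s.
	fmt.Println("podStartSLOduration ≈", running.Sub(created))
}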
Nov 12 20:46:27.430847 containerd[1586]: time="2024-11-12T20:46:27.430660901Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:27.432141 containerd[1586]: time="2024-11-12T20:46:27.432072082Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763343" Nov 12 20:46:27.433363 containerd[1586]: time="2024-11-12T20:46:27.433288177Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:27.435565 containerd[1586]: time="2024-11-12T20:46:27.435519078Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:27.436278 containerd[1586]: time="2024-11-12T20:46:27.436227890Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.601853068s" Nov 12 20:46:27.436278 containerd[1586]: time="2024-11-12T20:46:27.436268025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:46:27.455134 containerd[1586]: time="2024-11-12T20:46:27.455061733Z" level=info msg="CreateContainer within sandbox \"9b3497d1f1b2d60c5e25fc473fa73d9866c13f7027dcfd04c53d2802e88453ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:46:27.472002 containerd[1586]: time="2024-11-12T20:46:27.471941395Z" level=info msg="CreateContainer within sandbox \"9b3497d1f1b2d60c5e25fc473fa73d9866c13f7027dcfd04c53d2802e88453ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef461f6daf8eed3a00a18a42861552f390160bc20e012038d72262043f97ff05\"" Nov 12 20:46:27.472834 containerd[1586]: time="2024-11-12T20:46:27.472800419Z" level=info msg="StartContainer for \"ef461f6daf8eed3a00a18a42861552f390160bc20e012038d72262043f97ff05\"" Nov 12 20:46:27.946748 containerd[1586]: time="2024-11-12T20:46:27.946569499Z" level=info msg="StartContainer for \"ef461f6daf8eed3a00a18a42861552f390160bc20e012038d72262043f97ff05\" returns successfully" Nov 12 20:46:30.532590 kubelet[2815]: I1112 20:46:30.529095 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-ffbjk" podStartSLOduration=3.924364121 podStartE2EDuration="6.529009334s" podCreationTimestamp="2024-11-12 20:46:24 +0000 UTC" firstStartedPulling="2024-11-12 20:46:24.833864563 +0000 UTC m=+12.641733744" lastFinishedPulling="2024-11-12 20:46:27.438509776 +0000 UTC m=+15.246378957" observedRunningTime="2024-11-12 20:46:28.361863606 +0000 UTC m=+16.169732787" watchObservedRunningTime="2024-11-12 20:46:30.529009334 +0000 UTC m=+18.336878515" Nov 12 20:46:30.532590 kubelet[2815]: I1112 20:46:30.529249 2815 topology_manager.go:215] "Topology Admit Handler" podUID="ec53b4d1-c46e-4dc3-8bfa-58910c3088a9" podNamespace="calico-system" podName="calico-typha-65756fff97-t57gj" Nov 12 20:46:30.580550 kubelet[2815]: I1112 20:46:30.580501 2815 topology_manager.go:215] "Topology 
Admit Handler" podUID="f03973c3-f8ef-463b-8b3e-3a1648b70ae8" podNamespace="calico-system" podName="calico-node-b4n8t" Nov 12 20:46:30.692617 kubelet[2815]: I1112 20:46:30.692573 2815 topology_manager.go:215] "Topology Admit Handler" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" podNamespace="calico-system" podName="csi-node-driver-pn8fl" Nov 12 20:46:30.692964 kubelet[2815]: E1112 20:46:30.692910 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:30.704202 kubelet[2815]: I1112 20:46:30.704145 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ec53b4d1-c46e-4dc3-8bfa-58910c3088a9-typha-certs\") pod \"calico-typha-65756fff97-t57gj\" (UID: \"ec53b4d1-c46e-4dc3-8bfa-58910c3088a9\") " pod="calico-system/calico-typha-65756fff97-t57gj" Nov 12 20:46:30.704202 kubelet[2815]: I1112 20:46:30.704209 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-tigera-ca-bundle\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704434 kubelet[2815]: I1112 20:46:30.704244 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wzr\" (UniqueName: \"kubernetes.io/projected/ec53b4d1-c46e-4dc3-8bfa-58910c3088a9-kube-api-access-c2wzr\") pod \"calico-typha-65756fff97-t57gj\" (UID: \"ec53b4d1-c46e-4dc3-8bfa-58910c3088a9\") " pod="calico-system/calico-typha-65756fff97-t57gj" Nov 12 20:46:30.704434 kubelet[2815]: I1112 20:46:30.704274 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-var-run-calico\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704434 kubelet[2815]: I1112 20:46:30.704304 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-cni-log-dir\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704434 kubelet[2815]: I1112 20:46:30.704335 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-var-lib-calico\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704434 kubelet[2815]: I1112 20:46:30.704364 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-lib-modules\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704625 kubelet[2815]: I1112 20:46:30.704391 2815 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-policysync\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704625 kubelet[2815]: I1112 20:46:30.704423 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-cni-bin-dir\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704625 kubelet[2815]: I1112 20:46:30.704467 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-xtables-lock\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704625 kubelet[2815]: I1112 20:46:30.704499 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-cni-net-dir\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704625 kubelet[2815]: I1112 20:46:30.704527 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-flexvol-driver-host\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704783 kubelet[2815]: I1112 20:46:30.704552 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc9hw\" (UniqueName: \"kubernetes.io/projected/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-kube-api-access-hc9hw\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.704783 kubelet[2815]: I1112 20:46:30.704580 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec53b4d1-c46e-4dc3-8bfa-58910c3088a9-tigera-ca-bundle\") pod \"calico-typha-65756fff97-t57gj\" (UID: \"ec53b4d1-c46e-4dc3-8bfa-58910c3088a9\") " pod="calico-system/calico-typha-65756fff97-t57gj" Nov 12 20:46:30.704783 kubelet[2815]: I1112 20:46:30.704603 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f03973c3-f8ef-463b-8b3e-3a1648b70ae8-node-certs\") pod \"calico-node-b4n8t\" (UID: \"f03973c3-f8ef-463b-8b3e-3a1648b70ae8\") " pod="calico-system/calico-node-b4n8t" Nov 12 20:46:30.805566 kubelet[2815]: I1112 20:46:30.805069 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71ac2d0f-163c-4690-9604-80f6d13fee6e-socket-dir\") pod \"csi-node-driver-pn8fl\" (UID: \"71ac2d0f-163c-4690-9604-80f6d13fee6e\") " pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:30.805566 kubelet[2815]: I1112 20:46:30.805191 2815 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71ac2d0f-163c-4690-9604-80f6d13fee6e-varrun\") pod \"csi-node-driver-pn8fl\" (UID: \"71ac2d0f-163c-4690-9604-80f6d13fee6e\") " pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:30.805566 kubelet[2815]: I1112 20:46:30.805313 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71ac2d0f-163c-4690-9604-80f6d13fee6e-registration-dir\") pod \"csi-node-driver-pn8fl\" (UID: \"71ac2d0f-163c-4690-9604-80f6d13fee6e\") " pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:30.805566 kubelet[2815]: I1112 20:46:30.805486 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71ac2d0f-163c-4690-9604-80f6d13fee6e-kubelet-dir\") pod \"csi-node-driver-pn8fl\" (UID: \"71ac2d0f-163c-4690-9604-80f6d13fee6e\") " pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:30.806344 kubelet[2815]: I1112 20:46:30.805576 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljsn\" (UniqueName: \"kubernetes.io/projected/71ac2d0f-163c-4690-9604-80f6d13fee6e-kube-api-access-gljsn\") pod \"csi-node-driver-pn8fl\" (UID: \"71ac2d0f-163c-4690-9604-80f6d13fee6e\") " pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:30.814296 kubelet[2815]: E1112 20:46:30.814253 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.814296 kubelet[2815]: W1112 20:46:30.814285 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.814444 kubelet[2815]: E1112 20:46:30.814339 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:46:30.815683 kubelet[2815]: E1112 20:46:30.815663 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.815683 kubelet[2815]: W1112 20:46:30.815679 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.815820 kubelet[2815]: E1112 20:46:30.815706 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:46:30.815956 kubelet[2815]: E1112 20:46:30.815940 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.815992 kubelet[2815]: W1112 20:46:30.815956 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.815992 kubelet[2815]: E1112 20:46:30.815971 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:46:30.817538 kubelet[2815]: E1112 20:46:30.817102 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.817538 kubelet[2815]: W1112 20:46:30.817124 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.817538 kubelet[2815]: E1112 20:46:30.817152 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:46:30.841316 kubelet[2815]: E1112 20:46:30.841278 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:30.842602 containerd[1586]: time="2024-11-12T20:46:30.842544873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65756fff97-t57gj,Uid:ec53b4d1-c46e-4dc3-8bfa-58910c3088a9,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:30.871075 containerd[1586]: time="2024-11-12T20:46:30.870875057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:30.871075 containerd[1586]: time="2024-11-12T20:46:30.870950249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:30.871075 containerd[1586]: time="2024-11-12T20:46:30.870966720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:30.871299 containerd[1586]: time="2024-11-12T20:46:30.871174080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:30.887872 kubelet[2815]: E1112 20:46:30.887823 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:30.888795 containerd[1586]: time="2024-11-12T20:46:30.888748882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b4n8t,Uid:f03973c3-f8ef-463b-8b3e-3a1648b70ae8,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:30.907121 kubelet[2815]: E1112 20:46:30.907085 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.907121 kubelet[2815]: W1112 20:46:30.907106 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.907121 kubelet[2815]: E1112 20:46:30.907131 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[The FlexVolume probe-failure triple above (driver-call.go:262 unmarshal failure, driver-call.go:149 executable not found, plugins.go:730 probe error) repeats, identical except for timestamps, twenty more times from E1112 20:46:30.907439 through E1112 20:46:30.913832; the duplicates are elided here.]
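[Editor's note: each repetition above is one failed FlexVolume probe. The kubelet execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init; the binary does not exist, the call produces empty stdout, and unmarshalling that empty output as JSON fails with "unexpected end of JSON input", so the plugin directory is skipped -- once per probe pass, hence the spam. The Go program below is a minimal stdlib-only reproduction, not kubelet's actual driver-call.go: DriverStatus is a simplified stand-in for the real FlexVolume status document, and probeDriver is an invented helper.]

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON document a
// FlexVolume driver is expected to print on stdout.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probeDriver mimics the failing call: the executable is missing, the
// output is empty, and unmarshalling the empty output then fails.
func probeDriver(path string) error {
	out, err := exec.Command(path, "init").Output()
	if err != nil {
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			path, err, out)
	}
	var st DriverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		// json.Unmarshal on zero bytes yields "unexpected end of JSON input".
		return fmt.Errorf("error creating Flexvolume plugin: %w", jerr)
	}
	return nil
}

func main() {
	if err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println("Error dynamically probing plugins:", err)
	}
}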
[Four further identical triples, E1112 20:46:30.914114 through E1112 20:46:30.918155, likewise elided.]
Nov 12 20:46:30.921004 containerd[1586]: time="2024-11-12T20:46:30.920733401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:30.921070 containerd[1586]: time="2024-11-12T20:46:30.920874866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:30.921070 containerd[1586]: time="2024-11-12T20:46:30.920889874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:30.921070 containerd[1586]: time="2024-11-12T20:46:30.921012024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:30.925429 kubelet[2815]: E1112 20:46:30.925399 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:30.925429 kubelet[2815]: W1112 20:46:30.925429 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:30.925559 kubelet[2815]: E1112 20:46:30.925478 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:46:30.945513 containerd[1586]: time="2024-11-12T20:46:30.944496617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65756fff97-t57gj,Uid:ec53b4d1-c46e-4dc3-8bfa-58910c3088a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"66d2b2e8d728a3fa321ec58444c95922afefee748f0625307ecb46a3a974b3aa\"" Nov 12 20:46:30.946609 kubelet[2815]: E1112 20:46:30.946542 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:30.948034 containerd[1586]: time="2024-11-12T20:46:30.947996651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:46:30.972551 containerd[1586]: time="2024-11-12T20:46:30.972491041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b4n8t,Uid:f03973c3-f8ef-463b-8b3e-3a1648b70ae8,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\"" Nov 12 20:46:30.973376 kubelet[2815]: E1112 20:46:30.973352 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:32.305555 kubelet[2815]: E1112 20:46:32.305431 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:32.954439 containerd[1586]: time="2024-11-12T20:46:32.954370638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:32.955790 containerd[1586]: time="2024-11-12T20:46:32.955716666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:46:32.957106 containerd[1586]: time="2024-11-12T20:46:32.957062975Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:32.959225 containerd[1586]: time="2024-11-12T20:46:32.959171345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:32.959839 containerd[1586]: time="2024-11-12T20:46:32.959803102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id 
\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.011766956s" Nov 12 20:46:32.959905 containerd[1586]: time="2024-11-12T20:46:32.959839340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:46:32.960587 containerd[1586]: time="2024-11-12T20:46:32.960552369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:46:32.968421 containerd[1586]: time="2024-11-12T20:46:32.968381163Z" level=info msg="CreateContainer within sandbox \"66d2b2e8d728a3fa321ec58444c95922afefee748f0625307ecb46a3a974b3aa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:46:32.986918 containerd[1586]: time="2024-11-12T20:46:32.986856593Z" level=info msg="CreateContainer within sandbox \"66d2b2e8d728a3fa321ec58444c95922afefee748f0625307ecb46a3a974b3aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8062067645536a197091f778105f2fa047a1ae57c31e10bab0e69cbc4a4c2cf9\"" Nov 12 20:46:32.987226 containerd[1586]: time="2024-11-12T20:46:32.987194077Z" level=info msg="StartContainer for \"8062067645536a197091f778105f2fa047a1ae57c31e10bab0e69cbc4a4c2cf9\"" Nov 12 20:46:33.061294 containerd[1586]: time="2024-11-12T20:46:33.061238501Z" level=info msg="StartContainer for \"8062067645536a197091f778105f2fa047a1ae57c31e10bab0e69cbc4a4c2cf9\" returns successfully" Nov 12 20:46:33.374716 kubelet[2815]: E1112 20:46:33.374513 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:33.385091 kubelet[2815]: I1112 20:46:33.385005 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65756fff97-t57gj" podStartSLOduration=1.372350174 podStartE2EDuration="3.3849483s" podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:30.947650852 +0000 UTC m=+18.755520033" lastFinishedPulling="2024-11-12 20:46:32.960248968 +0000 UTC m=+20.768118159" observedRunningTime="2024-11-12 20:46:33.384073738 +0000 UTC m=+21.191942919" watchObservedRunningTime="2024-11-12 20:46:33.3849483 +0000 UTC m=+21.192817481" Nov 12 20:46:33.425843 kubelet[2815]: E1112 20:46:33.425812 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:46:33.425843 kubelet[2815]: W1112 20:46:33.425832 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:46:33.425843 kubelet[2815]: E1112 20:46:33.425862 2815 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the driver-call.go:262 / driver-call.go:149 / plugins.go:730 triplet above repeats with identical text roughly thirty more times through 20:46:33.434342; only the sub-second timestamps differ]
Nov 12 20:46:34.190483 containerd[1586]: time="2024-11-12T20:46:34.190401415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:34.191261 containerd[1586]: time="2024-11-12T20:46:34.191221074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:46:34.192826 containerd[1586]: time="2024-11-12T20:46:34.192778940Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:34.194950 containerd[1586]: time="2024-11-12T20:46:34.194901687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:34.195487 containerd[1586]: time="2024-11-12T20:46:34.195432464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.234845029s" Nov 12 20:46:34.195487 containerd[1586]: time="2024-11-12T20:46:34.195479161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:46:34.197389 containerd[1586]: time="2024-11-12T20:46:34.197343102Z" level=info msg="CreateContainer within sandbox \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:46:34.236775 containerd[1586]: time="2024-11-12T20:46:34.236690376Z" level=info msg="CreateContainer within sandbox
\"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700\"" Nov 12 20:46:34.237821 containerd[1586]: time="2024-11-12T20:46:34.237781145Z" level=info msg="StartContainer for \"cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700\"" Nov 12 20:46:34.300761 containerd[1586]: time="2024-11-12T20:46:34.300706209Z" level=info msg="StartContainer for \"cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700\" returns successfully" Nov 12 20:46:34.303620 kubelet[2815]: E1112 20:46:34.303585 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:34.377033 kubelet[2815]: I1112 20:46:34.376991 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:46:34.402875 kubelet[2815]: E1112 20:46:34.377771 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:34.402875 kubelet[2815]: E1112 20:46:34.377966 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:34.717194 containerd[1586]: time="2024-11-12T20:46:34.715406333Z" level=info msg="shim disconnected" id=cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700 namespace=k8s.io Nov 12 20:46:34.717194 containerd[1586]: time="2024-11-12T20:46:34.717172600Z" level=warning msg="cleaning up after shim disconnected" id=cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700 namespace=k8s.io Nov 12 20:46:34.717194 containerd[1586]: time="2024-11-12T20:46:34.717185865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:34.966527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cecd50ecc2bdbe03818222c247aaf0fa52c70ffbe247b13919670f02f3135700-rootfs.mount: Deactivated successfully. 
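The FlexVolume driver-call failures above have a single cause: kubelet's plugin prober finds the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ before the flexvol-driver init container (built from the pod2daemon-flexvol image pulled above) has installed the uds binary into it. The exec therefore fails, the captured output is empty, and decoding "" as JSON produces exactly the logged error. A minimal sketch of that chain, assuming kubelet simply execs the driver and unmarshals its stdout (struct fields are illustrative, not kubelet's):

```go
// Minimal sketch, not kubelet's actual code: exec a FlexVolume driver and
// JSON-decode its stdout. With the uds binary missing, the exec fails
// (kubelet's own wrapper reports "executable file not found in $PATH"),
// the output stays empty, and json.Unmarshal("") returns the logged
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver prints; the exact
// field set here is illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Println("driver call failed:", err) // cf. driver-call.go:149
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // "unexpected end of JSON input", cf. driver-call.go:262
	}
}
```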
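The pod_startup_latency_tracker entry for calico-typha-65756fff97-t57gj also checks out arithmetically: podStartSLOduration (1.372350174s) is the end-to-end startup (3.3849483s) minus the image-pull window between firstStartedPulling and lastFinishedPulling (~2.0126s). A quick verification using the timestamps from that log line:

```go
// Check of the pod_startup_latency_tracker numbers: the tracker excludes
// time spent pulling images from the SLO duration, and the subtraction
// below matches the logged values to within rounding.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-11-12T20:46:30Z")             // podCreationTimestamp
	firstPull := mustParse("2024-11-12T20:46:30.947650852Z") // firstStartedPulling
	lastPull := mustParse("2024-11-12T20:46:32.960248968Z")  // lastFinishedPulling
	running := mustParse("2024-11-12T20:46:33.3849483Z")     // watchObservedRunningTime

	e2e := running.Sub(created)     // 3.3849483s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~2.012598116s spent pulling
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", e2e-pull) // slo ≈ 1.37235s, as logged
}
```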
Nov 12 20:46:35.380688 kubelet[2815]: E1112 20:46:35.380649 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:35.381541 containerd[1586]: time="2024-11-12T20:46:35.381422444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:46:36.300381 kubelet[2815]: E1112 20:46:36.300321 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:38.300517 kubelet[2815]: E1112 20:46:38.300468 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:38.839117 containerd[1586]: time="2024-11-12T20:46:38.839025813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:38.859100 containerd[1586]: time="2024-11-12T20:46:38.859010719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:46:38.879652 containerd[1586]: time="2024-11-12T20:46:38.879596243Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:38.947741 containerd[1586]: time="2024-11-12T20:46:38.947669877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:38.948633 containerd[1586]: time="2024-11-12T20:46:38.948589464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.56710287s" Nov 12 20:46:38.948691 containerd[1586]: time="2024-11-12T20:46:38.948638166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:46:38.950809 containerd[1586]: time="2024-11-12T20:46:38.950771241Z" level=info msg="CreateContainer within sandbox \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:46:39.065202 containerd[1586]: time="2024-11-12T20:46:39.065146373Z" level=info msg="CreateContainer within sandbox \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632\"" Nov 12 20:46:39.065914 containerd[1586]: time="2024-11-12T20:46:39.065852269Z" level=info msg="StartContainer for \"07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632\"" Nov 
12 20:46:39.166363 containerd[1586]: time="2024-11-12T20:46:39.166215419Z" level=info msg="StartContainer for \"07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632\" returns successfully" Nov 12 20:46:39.389329 kubelet[2815]: E1112 20:46:39.389277 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:40.301538 kubelet[2815]: E1112 20:46:40.301487 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:40.391761 kubelet[2815]: E1112 20:46:40.391669 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:40.950698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632-rootfs.mount: Deactivated successfully. Nov 12 20:46:40.954689 containerd[1586]: time="2024-11-12T20:46:40.954620263Z" level=info msg="shim disconnected" id=07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632 namespace=k8s.io Nov 12 20:46:40.954689 containerd[1586]: time="2024-11-12T20:46:40.954687660Z" level=warning msg="cleaning up after shim disconnected" id=07abe03c7fea2f49cb6b7c951ce37893a64413947dcc23c5e148eb0e01caf632 namespace=k8s.io Nov 12 20:46:40.955117 containerd[1586]: time="2024-11-12T20:46:40.954696937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:40.999503 kubelet[2815]: I1112 20:46:40.999468 2815 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:46:41.024842 kubelet[2815]: I1112 20:46:41.024785 2815 topology_manager.go:215] "Topology Admit Handler" podUID="02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3" podNamespace="kube-system" podName="coredns-76f75df574-5mwdx" Nov 12 20:46:41.026772 kubelet[2815]: I1112 20:46:41.026738 2815 topology_manager.go:215] "Topology Admit Handler" podUID="3b28f423-d59c-445f-922b-3b39379c4a87" podNamespace="kube-system" podName="coredns-76f75df574-z7dpt" Nov 12 20:46:41.029085 kubelet[2815]: I1112 20:46:41.027535 2815 topology_manager.go:215] "Topology Admit Handler" podUID="685f068e-e191-4943-9495-a2b63c195079" podNamespace="calico-apiserver" podName="calico-apiserver-f884fd4f8-sk7r7" Nov 12 20:46:41.029379 kubelet[2815]: I1112 20:46:41.029335 2815 topology_manager.go:215] "Topology Admit Handler" podUID="78ca05ac-4333-451e-baf1-754b7ff398b3" podNamespace="calico-system" podName="calico-kube-controllers-74c7666879-4rdd7" Nov 12 20:46:41.029871 kubelet[2815]: I1112 20:46:41.029845 2815 topology_manager.go:215] "Topology Admit Handler" podUID="d3f327c6-ad19-4f3c-afcb-be3bdd722d7e" podNamespace="calico-apiserver" podName="calico-apiserver-f884fd4f8-jqrc8" Nov 12 20:46:41.080354 kubelet[2815]: I1112 20:46:41.080280 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b28f423-d59c-445f-922b-3b39379c4a87-config-volume\") pod \"coredns-76f75df574-z7dpt\" (UID: \"3b28f423-d59c-445f-922b-3b39379c4a87\") " pod="kube-system/coredns-76f75df574-z7dpt" Nov 12 20:46:41.080547 kubelet[2815]: I1112 
20:46:41.080377 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/685f068e-e191-4943-9495-a2b63c195079-calico-apiserver-certs\") pod \"calico-apiserver-f884fd4f8-sk7r7\" (UID: \"685f068e-e191-4943-9495-a2b63c195079\") " pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" Nov 12 20:46:41.080547 kubelet[2815]: I1112 20:46:41.080413 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k98j\" (UniqueName: \"kubernetes.io/projected/685f068e-e191-4943-9495-a2b63c195079-kube-api-access-9k98j\") pod \"calico-apiserver-f884fd4f8-sk7r7\" (UID: \"685f068e-e191-4943-9495-a2b63c195079\") " pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" Nov 12 20:46:41.080547 kubelet[2815]: I1112 20:46:41.080439 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nlh\" (UniqueName: \"kubernetes.io/projected/3b28f423-d59c-445f-922b-3b39379c4a87-kube-api-access-88nlh\") pod \"coredns-76f75df574-z7dpt\" (UID: \"3b28f423-d59c-445f-922b-3b39379c4a87\") " pod="kube-system/coredns-76f75df574-z7dpt" Nov 12 20:46:41.080700 kubelet[2815]: I1112 20:46:41.080638 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ca05ac-4333-451e-baf1-754b7ff398b3-tigera-ca-bundle\") pod \"calico-kube-controllers-74c7666879-4rdd7\" (UID: \"78ca05ac-4333-451e-baf1-754b7ff398b3\") " pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" Nov 12 20:46:41.080746 kubelet[2815]: I1112 20:46:41.080738 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3f327c6-ad19-4f3c-afcb-be3bdd722d7e-calico-apiserver-certs\") pod \"calico-apiserver-f884fd4f8-jqrc8\" (UID: \"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e\") " pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" Nov 12 20:46:41.080803 kubelet[2815]: I1112 20:46:41.080786 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3-config-volume\") pod \"coredns-76f75df574-5mwdx\" (UID: \"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3\") " pod="kube-system/coredns-76f75df574-5mwdx" Nov 12 20:46:41.080845 kubelet[2815]: I1112 20:46:41.080828 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grb8c\" (UniqueName: \"kubernetes.io/projected/d3f327c6-ad19-4f3c-afcb-be3bdd722d7e-kube-api-access-grb8c\") pod \"calico-apiserver-f884fd4f8-jqrc8\" (UID: \"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e\") " pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" Nov 12 20:46:41.080891 kubelet[2815]: I1112 20:46:41.080877 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5svc\" (UniqueName: \"kubernetes.io/projected/78ca05ac-4333-451e-baf1-754b7ff398b3-kube-api-access-h5svc\") pod \"calico-kube-controllers-74c7666879-4rdd7\" (UID: \"78ca05ac-4333-451e-baf1-754b7ff398b3\") " pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" Nov 12 20:46:41.080943 kubelet[2815]: I1112 20:46:41.080922 2815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jgwxm\" (UniqueName: \"kubernetes.io/projected/02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3-kube-api-access-jgwxm\") pod \"coredns-76f75df574-5mwdx\" (UID: \"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3\") " pod="kube-system/coredns-76f75df574-5mwdx" Nov 12 20:46:41.335201 kubelet[2815]: E1112 20:46:41.335167 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:41.335952 containerd[1586]: time="2024-11-12T20:46:41.335890456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mwdx,Uid:02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:41.338907 kubelet[2815]: E1112 20:46:41.338710 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:41.340476 containerd[1586]: time="2024-11-12T20:46:41.339087479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z7dpt,Uid:3b28f423-d59c-445f-922b-3b39379c4a87,Namespace:kube-system,Attempt:0,}" Nov 12 20:46:41.344977 containerd[1586]: time="2024-11-12T20:46:41.344944947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-sk7r7,Uid:685f068e-e191-4943-9495-a2b63c195079,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:46:41.352510 containerd[1586]: time="2024-11-12T20:46:41.352477319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-jqrc8,Uid:d3f327c6-ad19-4f3c-afcb-be3bdd722d7e,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:46:41.353945 containerd[1586]: time="2024-11-12T20:46:41.353914097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c7666879-4rdd7,Uid:78ca05ac-4333-451e-baf1-754b7ff398b3,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:41.393873 kubelet[2815]: E1112 20:46:41.393847 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:41.394656 containerd[1586]: time="2024-11-12T20:46:41.394623892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:46:41.965905 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608). Nov 12 20:46:42.014362 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:42.017002 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:42.028961 systemd-logind[1564]: New session 8 of user core. Nov 12 20:46:42.037989 systemd[1]: Started session-8.scope - Session 8 of User core. 
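The recurring dns.go:153 error is noisy but mostly harmless: the node's resolv.conf evidently carries more than the classic resolver limit of three nameservers, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A simplified sketch of that cap (kubelet's actual parsing lives in its dns package; this is not its code):

```go
// Simplified sketch of the nameserver cap behind dns.go:153: keep only the
// first three "nameserver" entries from resolv.conf (glibc's MAXNS is 3;
// kubelet enforces the same limit) and warn when more were present.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying first %d of %d: %v\n",
			maxNameservers, len(servers), servers[:maxNameservers])
	}
}
```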
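The five RunPodSandbox requests just issued are all about to fail identically: calico-node is still starting (its image pull only began at 20:46:41.394), so nothing has yet written /var/lib/calico/nodename, and the Calico CNI plugin rejects every sandbox setup and teardown until that file exists, per the error text that follows. A hypothetical sketch of that gate:

```go
// Hypothetical sketch of the readiness gate named in the errors below: the
// calico/node container writes the node's name to /var/lib/calico/nodename
// once it is running; until then the Calico CNI plugin's stat of that path
// fails and every CNI ADD/DEL is refused.
package main

import (
	"fmt"
	"os"
)

func main() {
	name, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		// matches: plugin type="calico" failed: stat /var/lib/calico/nodename:
		// no such file or directory
		fmt.Println("calico CNI not ready:", err)
		return
	}
	fmt.Println("calico node:", string(name))
}
```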
Nov 12 20:46:42.128427 containerd[1586]: time="2024-11-12T20:46:42.128356011Z" level=error msg="Failed to destroy network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.129705 containerd[1586]: time="2024-11-12T20:46:42.129583673Z" level=error msg="encountered an error cleaning up failed sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.129705 containerd[1586]: time="2024-11-12T20:46:42.129675572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z7dpt,Uid:3b28f423-d59c-445f-922b-3b39379c4a87,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.130186 kubelet[2815]: E1112 20:46:42.130163 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.131286 kubelet[2815]: E1112 20:46:42.130343 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z7dpt" Nov 12 20:46:42.131286 kubelet[2815]: E1112 20:46:42.130368 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z7dpt" Nov 12 20:46:42.131286 kubelet[2815]: E1112 20:46:42.130424 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z7dpt_kube-system(3b28f423-d59c-445f-922b-3b39379c4a87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z7dpt_kube-system(3b28f423-d59c-445f-922b-3b39379c4a87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-z7dpt" podUID="3b28f423-d59c-445f-922b-3b39379c4a87" Nov 12 20:46:42.132273 containerd[1586]: time="2024-11-12T20:46:42.131803543Z" level=error msg="Failed to destroy network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.132273 containerd[1586]: time="2024-11-12T20:46:42.132238363Z" level=error msg="encountered an error cleaning up failed sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.132401 containerd[1586]: time="2024-11-12T20:46:42.132302007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-sk7r7,Uid:685f068e-e191-4943-9495-a2b63c195079,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.133872 kubelet[2815]: E1112 20:46:42.133837 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.133947 kubelet[2815]: E1112 20:46:42.133920 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" Nov 12 20:46:42.133978 kubelet[2815]: E1112 20:46:42.133952 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" Nov 12 20:46:42.134054 kubelet[2815]: E1112 20:46:42.134026 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f884fd4f8-sk7r7_calico-apiserver(685f068e-e191-4943-9495-a2b63c195079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f884fd4f8-sk7r7_calico-apiserver(685f068e-e191-4943-9495-a2b63c195079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" podUID="685f068e-e191-4943-9495-a2b63c195079" Nov 12 20:46:42.136984 containerd[1586]: time="2024-11-12T20:46:42.136914429Z" level=error msg="Failed to destroy network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.137489 containerd[1586]: time="2024-11-12T20:46:42.137416699Z" level=error msg="encountered an error cleaning up failed sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.137582 containerd[1586]: time="2024-11-12T20:46:42.137487176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c7666879-4rdd7,Uid:78ca05ac-4333-451e-baf1-754b7ff398b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.138076 kubelet[2815]: E1112 20:46:42.137783 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.138076 kubelet[2815]: E1112 20:46:42.137833 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" Nov 12 20:46:42.138076 kubelet[2815]: E1112 20:46:42.137856 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" Nov 12 20:46:42.138212 kubelet[2815]: E1112 20:46:42.137906 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c7666879-4rdd7_calico-system(78ca05ac-4333-451e-baf1-754b7ff398b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c7666879-4rdd7_calico-system(78ca05ac-4333-451e-baf1-754b7ff398b3)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" podUID="78ca05ac-4333-451e-baf1-754b7ff398b3" Nov 12 20:46:42.138288 containerd[1586]: time="2024-11-12T20:46:42.138226855Z" level=error msg="Failed to destroy network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.138765 containerd[1586]: time="2024-11-12T20:46:42.138737331Z" level=error msg="encountered an error cleaning up failed sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.138826 containerd[1586]: time="2024-11-12T20:46:42.138793891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mwdx,Uid:02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.139212 kubelet[2815]: E1112 20:46:42.139097 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.139440 kubelet[2815]: E1112 20:46:42.139300 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5mwdx" Nov 12 20:46:42.139440 kubelet[2815]: E1112 20:46:42.139327 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5mwdx" Nov 12 20:46:42.139722 kubelet[2815]: E1112 20:46:42.139639 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5mwdx_kube-system(02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-5mwdx_kube-system(02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5mwdx" podUID="02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3" Nov 12 20:46:42.145152 containerd[1586]: time="2024-11-12T20:46:42.145103081Z" level=error msg="Failed to destroy network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.145612 containerd[1586]: time="2024-11-12T20:46:42.145544334Z" level=error msg="encountered an error cleaning up failed sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.145612 containerd[1586]: time="2024-11-12T20:46:42.145589671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-jqrc8,Uid:d3f327c6-ad19-4f3c-afcb-be3bdd722d7e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.145875 kubelet[2815]: E1112 20:46:42.145843 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.145920 kubelet[2815]: E1112 20:46:42.145896 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" Nov 12 20:46:42.145960 kubelet[2815]: E1112 20:46:42.145920 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" Nov 12 20:46:42.146008 kubelet[2815]: E1112 20:46:42.145983 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f884fd4f8-jqrc8_calico-apiserver(d3f327c6-ad19-4f3c-afcb-be3bdd722d7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f884fd4f8-jqrc8_calico-apiserver(d3f327c6-ad19-4f3c-afcb-be3bdd722d7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" podUID="d3f327c6-ad19-4f3c-afcb-be3bdd722d7e" Nov 12 20:46:42.198006 sshd[3546]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:42.203100 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:51608.service: Deactivated successfully. Nov 12 20:46:42.205906 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:46:42.207340 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:46:42.208377 systemd-logind[1564]: Removed session 8. Nov 12 20:46:42.303830 containerd[1586]: time="2024-11-12T20:46:42.303712349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pn8fl,Uid:71ac2d0f-163c-4690-9604-80f6d13fee6e,Namespace:calico-system,Attempt:0,}" Nov 12 20:46:42.379340 containerd[1586]: time="2024-11-12T20:46:42.379265720Z" level=error msg="Failed to destroy network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.379777 containerd[1586]: time="2024-11-12T20:46:42.379744325Z" level=error msg="encountered an error cleaning up failed sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.379830 containerd[1586]: time="2024-11-12T20:46:42.379802828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pn8fl,Uid:71ac2d0f-163c-4690-9604-80f6d13fee6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.380114 kubelet[2815]: E1112 20:46:42.380078 2815 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.380179 kubelet[2815]: E1112 20:46:42.380147 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:42.380179 kubelet[2815]: E1112 20:46:42.380168 2815 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pn8fl" Nov 12 20:46:42.380236 kubelet[2815]: E1112 20:46:42.380229 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pn8fl_calico-system(71ac2d0f-163c-4690-9604-80f6d13fee6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pn8fl_calico-system(71ac2d0f-163c-4690-9604-80f6d13fee6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:42.396686 kubelet[2815]: I1112 20:46:42.396639 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:42.398376 containerd[1586]: time="2024-11-12T20:46:42.397445458Z" level=info msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" Nov 12 20:46:42.398376 containerd[1586]: time="2024-11-12T20:46:42.397720170Z" level=info msg="Ensure that sandbox f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4 in task-service has been cleanup successfully" Nov 12 20:46:42.398376 containerd[1586]: time="2024-11-12T20:46:42.397961295Z" level=info msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" Nov 12 20:46:42.398376 containerd[1586]: time="2024-11-12T20:46:42.398119081Z" level=info msg="Ensure that sandbox db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef in task-service has been cleanup successfully" Nov 12 20:46:42.398599 kubelet[2815]: I1112 20:46:42.397553 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:42.399221 kubelet[2815]: I1112 20:46:42.399193 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:42.400019 containerd[1586]: time="2024-11-12T20:46:42.399791232Z" level=info msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" Nov 12 20:46:42.400019 containerd[1586]: time="2024-11-12T20:46:42.399915001Z" level=info msg="Ensure that sandbox af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74 in task-service has been cleanup successfully" Nov 12 20:46:42.403700 kubelet[2815]: I1112 20:46:42.403644 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:42.404233 containerd[1586]: 
time="2024-11-12T20:46:42.404202275Z" level=info msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" Nov 12 20:46:42.404535 containerd[1586]: time="2024-11-12T20:46:42.404376442Z" level=info msg="Ensure that sandbox 870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e in task-service has been cleanup successfully" Nov 12 20:46:42.409273 kubelet[2815]: I1112 20:46:42.408864 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:42.409399 containerd[1586]: time="2024-11-12T20:46:42.409364349Z" level=info msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" Nov 12 20:46:42.410090 containerd[1586]: time="2024-11-12T20:46:42.410067949Z" level=info msg="Ensure that sandbox 8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6 in task-service has been cleanup successfully" Nov 12 20:46:42.412982 kubelet[2815]: I1112 20:46:42.412951 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:42.414022 containerd[1586]: time="2024-11-12T20:46:42.413990077Z" level=info msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" Nov 12 20:46:42.414716 containerd[1586]: time="2024-11-12T20:46:42.414681574Z" level=info msg="Ensure that sandbox e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7 in task-service has been cleanup successfully" Nov 12 20:46:42.453603 containerd[1586]: time="2024-11-12T20:46:42.453316057Z" level=error msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" failed" error="failed to destroy network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.453768 kubelet[2815]: E1112 20:46:42.453723 2815 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:42.453819 kubelet[2815]: E1112 20:46:42.453811 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4"} Nov 12 20:46:42.453880 kubelet[2815]: E1112 20:46:42.453847 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71ac2d0f-163c-4690-9604-80f6d13fee6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.453963 kubelet[2815]: E1112 20:46:42.453882 2815 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"71ac2d0f-163c-4690-9604-80f6d13fee6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pn8fl" podUID="71ac2d0f-163c-4690-9604-80f6d13fee6e" Nov 12 20:46:42.458753 containerd[1586]: time="2024-11-12T20:46:42.458633120Z" level=error msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" failed" error="failed to destroy network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.459349 kubelet[2815]: E1112 20:46:42.459163 2815 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:42.459349 kubelet[2815]: E1112 20:46:42.459218 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef"} Nov 12 20:46:42.459349 kubelet[2815]: E1112 20:46:42.459259 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"685f068e-e191-4943-9495-a2b63c195079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.459349 kubelet[2815]: E1112 20:46:42.459307 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"685f068e-e191-4943-9495-a2b63c195079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" podUID="685f068e-e191-4943-9495-a2b63c195079" Nov 12 20:46:42.466005 containerd[1586]: time="2024-11-12T20:46:42.465973845Z" level=error msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" failed" error="failed to destroy network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.466603 kubelet[2815]: E1112 20:46:42.466586 2815 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:42.466901 kubelet[2815]: E1112 20:46:42.466796 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6"} Nov 12 20:46:42.466901 kubelet[2815]: E1112 20:46:42.466844 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.466901 kubelet[2815]: E1112 20:46:42.466872 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5mwdx" podUID="02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3" Nov 12 20:46:42.467928 containerd[1586]: time="2024-11-12T20:46:42.467798791Z" level=error msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" failed" error="failed to destroy network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.468205 kubelet[2815]: E1112 20:46:42.468184 2815 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:42.468307 kubelet[2815]: E1112 20:46:42.468212 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e"} Nov 12 20:46:42.468307 kubelet[2815]: E1112 20:46:42.468292 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b28f423-d59c-445f-922b-3b39379c4a87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.468409 kubelet[2815]: E1112 20:46:42.468315 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b28f423-d59c-445f-922b-3b39379c4a87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z7dpt" podUID="3b28f423-d59c-445f-922b-3b39379c4a87" Nov 12 20:46:42.469715 containerd[1586]: time="2024-11-12T20:46:42.469629719Z" level=error msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" failed" error="failed to destroy network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.469936 kubelet[2815]: E1112 20:46:42.469915 2815 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:42.469981 kubelet[2815]: E1112 20:46:42.469941 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7"} Nov 12 20:46:42.469981 kubelet[2815]: E1112 20:46:42.469969 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.470061 kubelet[2815]: E1112 20:46:42.469992 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" podUID="d3f327c6-ad19-4f3c-afcb-be3bdd722d7e" Nov 12 20:46:42.470507 containerd[1586]: time="2024-11-12T20:46:42.470449903Z" level=error msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" failed" error="failed to destroy network for sandbox 
\"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:46:42.470729 kubelet[2815]: E1112 20:46:42.470707 2815 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:42.470780 kubelet[2815]: E1112 20:46:42.470734 2815 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74"} Nov 12 20:46:42.470780 kubelet[2815]: E1112 20:46:42.470761 2815 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78ca05ac-4333-451e-baf1-754b7ff398b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:46:42.470865 kubelet[2815]: E1112 20:46:42.470782 2815 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78ca05ac-4333-451e-baf1-754b7ff398b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" podUID="78ca05ac-4333-451e-baf1-754b7ff398b3" Nov 12 20:46:42.984441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74-shm.mount: Deactivated successfully. Nov 12 20:46:42.984699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef-shm.mount: Deactivated successfully. Nov 12 20:46:42.984888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e-shm.mount: Deactivated successfully. Nov 12 20:46:42.985090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6-shm.mount: Deactivated successfully. Nov 12 20:46:42.985272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7-shm.mount: Deactivated successfully. Nov 12 20:46:46.214587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008711534.mount: Deactivated successfully. Nov 12 20:46:47.210716 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:55678.service - OpenSSH per-connection server daemon (10.0.0.1:55678). 
Nov 12 20:46:48.495018 kubelet[2815]: E1112 20:46:48.494778 2815 kubelet.go:2503] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.194s" Nov 12 20:46:48.502715 containerd[1586]: time="2024-11-12T20:46:48.502609588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:48.504309 containerd[1586]: time="2024-11-12T20:46:48.504245134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:46:48.506192 containerd[1586]: time="2024-11-12T20:46:48.506143697Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:48.511038 containerd[1586]: time="2024-11-12T20:46:48.510940153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:48.511869 containerd[1586]: time="2024-11-12T20:46:48.511792764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 7.11711531s" Nov 12 20:46:48.511869 containerd[1586]: time="2024-11-12T20:46:48.511867037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:46:48.515533 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 55678 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:48.524644 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:48.525628 containerd[1586]: time="2024-11-12T20:46:48.524804925Z" level=info msg="CreateContainer within sandbox \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:46:48.530206 systemd-logind[1564]: New session 9 of user core. Nov 12 20:46:48.535841 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:46:48.557969 containerd[1586]: time="2024-11-12T20:46:48.557893298Z" level=info msg="CreateContainer within sandbox \"2ed3d844970727db1b2ee2bc7fb694b5058f30c852ab0aad1690272df1bea71d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7aabc83e395c932804948db21033626a1cbb331f1d2b2b2b9d4955f8aab1a9c9\"" Nov 12 20:46:48.558984 containerd[1586]: time="2024-11-12T20:46:48.558937868Z" level=info msg="StartContainer for \"7aabc83e395c932804948db21033626a1cbb331f1d2b2b2b9d4955f8aab1a9c9\"" Nov 12 20:46:48.683751 sshd[3918]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:48.687147 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:55678.service: Deactivated successfully. Nov 12 20:46:48.698123 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:46:48.699051 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:46:48.704549 systemd-logind[1564]: Removed session 9. 
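The pull record above carries both a byte count and a duration, so the effective transfer rate falls straight out of the logged numbers: 140,580,572 bytes in 7.11711531 s is roughly 19.8 MB/s. A throwaway Go check of that arithmetic, using only values copied from the log:

    package main

    import "fmt"

    func main() {
        // Both figures copied from the "Pulled image" line above.
        const imageBytes = 140580572   // reported image size in bytes
        const pullSeconds = 7.11711531 // reported pull duration
        fmt.Printf("%.1f MB/s\n", imageBytes/pullSeconds/1e6) // prints 19.8 MB/s
    }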
Nov 12 20:46:49.101403 containerd[1586]: time="2024-11-12T20:46:49.101312523Z" level=info msg="StartContainer for \"7aabc83e395c932804948db21033626a1cbb331f1d2b2b2b9d4955f8aab1a9c9\" returns successfully" Nov 12 20:46:49.131856 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:46:49.133065 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 20:46:49.431340 kubelet[2815]: E1112 20:46:49.431180 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:49.444748 kubelet[2815]: I1112 20:46:49.444683 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-b4n8t" podStartSLOduration=1.906901986 podStartE2EDuration="19.444636827s" podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:30.974552634 +0000 UTC m=+18.782421815" lastFinishedPulling="2024-11-12 20:46:48.512287475 +0000 UTC m=+36.320156656" observedRunningTime="2024-11-12 20:46:49.444168858 +0000 UTC m=+37.252038039" watchObservedRunningTime="2024-11-12 20:46:49.444636827 +0000 UTC m=+37.252506039" Nov 12 20:46:50.435530 kubelet[2815]: E1112 20:46:50.435496 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:53.693836 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:55682.service - OpenSSH per-connection server daemon (10.0.0.1:55682). Nov 12 20:46:53.862164 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 55682 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:53.864514 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:53.870278 systemd-logind[1564]: New session 10 of user core. Nov 12 20:46:53.876884 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:46:54.048351 sshd[4202]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:54.053775 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:55682.service: Deactivated successfully. Nov 12 20:46:54.057035 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:46:54.057080 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:46:54.058630 systemd-logind[1564]: Removed session 10.
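The pod_startup_latency_tracker line above encodes its own arithmetic: podStartE2EDuration (19.444636827s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.906901986) is what remains after subtracting the image-pull window (firstStartedPulling to lastFinishedPulling, about 17.54s), since the startup SLO metric excludes pull time. A small Go sketch recomputing both from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // All four timestamps copied from the kubelet line above.
        created := parse("2024-11-12 20:46:30 +0000 UTC")
        firstPull := parse("2024-11-12 20:46:30.974552634 +0000 UTC")
        lastPull := parse("2024-11-12 20:46:48.512287475 +0000 UTC")
        running := parse("2024-11-12 20:46:49.444636827 +0000 UTC")

        e2e := running.Sub(created)        // 19.444636827s (podStartE2EDuration)
        pulling := lastPull.Sub(firstPull) // 17.537734841s of image pulling
        fmt.Println("e2e:", e2e, "slo:", e2e-pulling) // slo: 1.906901986s
    }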
Nov 12 20:46:55.301772 containerd[1586]: time="2024-11-12T20:46:55.301674710Z" level=info msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" Nov 12 20:46:55.302391 containerd[1586]: time="2024-11-12T20:46:55.301855356Z" level=info msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" Nov 12 20:46:55.302391 containerd[1586]: time="2024-11-12T20:46:55.302326628Z" level=info msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" Nov 12 20:46:55.303014 containerd[1586]: time="2024-11-12T20:46:55.302970411Z" level=info msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" Nov 12 20:46:55.304349 containerd[1586]: time="2024-11-12T20:46:55.304288316Z" level=info msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.398 [INFO][4331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" iface="eth0" netns="/var/run/netns/cni-ce59e759-4245-3e09-b829-a61bb665bc73" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" iface="eth0" netns="/var/run/netns/cni-ce59e759-4245-3e09-b829-a61bb665bc73" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" iface="eth0" netns="/var/run/netns/cni-ce59e759-4245-3e09-b829-a61bb665bc73" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.478 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.487 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.487 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.489 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:55.494602 containerd[1586]: 2024-11-12 20:46:55.492 [INFO][4331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:46:55.495377 containerd[1586]: time="2024-11-12T20:46:55.494826190Z" level=info msg="TearDown network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" successfully" Nov 12 20:46:55.495377 containerd[1586]: time="2024-11-12T20:46:55.494862891Z" level=info msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" returns successfully" Nov 12 20:46:55.496029 containerd[1586]: time="2024-11-12T20:46:55.495994409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-sk7r7,Uid:685f068e-e191-4943-9495-a2b63c195079,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:46:55.500810 systemd[1]: run-netns-cni\x2dce59e759\x2d4245\x2d3e09\x2db829\x2da61bb665bc73.mount: Deactivated successfully. Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.406 [INFO][4340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.406 [INFO][4340] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" iface="eth0" netns="/var/run/netns/cni-012cdbd2-d830-781e-7b72-184ef2d3e174" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.406 [INFO][4340] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" iface="eth0" netns="/var/run/netns/cni-012cdbd2-d830-781e-7b72-184ef2d3e174" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.407 [INFO][4340] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" iface="eth0" netns="/var/run/netns/cni-012cdbd2-d830-781e-7b72-184ef2d3e174" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.407 [INFO][4340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.407 [INFO][4340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.478 [INFO][4383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.478 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.489 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.499 [WARNING][4383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.499 [INFO][4383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.501 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:55.509827 containerd[1586]: 2024-11-12 20:46:55.506 [INFO][4340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:46:55.511725 containerd[1586]: time="2024-11-12T20:46:55.511633326Z" level=info msg="TearDown network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" successfully" Nov 12 20:46:55.511725 containerd[1586]: time="2024-11-12T20:46:55.511676730Z" level=info msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" returns successfully" Nov 12 20:46:55.512186 kubelet[2815]: E1112 20:46:55.512145 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:55.513376 systemd[1]: run-netns-cni\x2d012cdbd2\x2dd830\x2d781e\x2d7b72\x2d184ef2d3e174.mount: Deactivated successfully. 
Nov 12 20:46:55.515319 containerd[1586]: time="2024-11-12T20:46:55.515012360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z7dpt,Uid:3b28f423-d59c-445f-922b-3b39379c4a87,Namespace:kube-system,Attempt:1,}" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.397 [INFO][4346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.401 [INFO][4346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" iface="eth0" netns="/var/run/netns/cni-f0620635-ad35-8ddd-e358-bd6c8389e733" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.407 [INFO][4346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" iface="eth0" netns="/var/run/netns/cni-f0620635-ad35-8ddd-e358-bd6c8389e733" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.408 [INFO][4346] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" iface="eth0" netns="/var/run/netns/cni-f0620635-ad35-8ddd-e358-bd6c8389e733" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.408 [INFO][4346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.408 [INFO][4346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.478 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.501 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.507 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.507 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.509 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:55.520112 containerd[1586]: 2024-11-12 20:46:55.517 [INFO][4346] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:46:55.520690 containerd[1586]: time="2024-11-12T20:46:55.520350675Z" level=info msg="TearDown network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" successfully" Nov 12 20:46:55.520690 containerd[1586]: time="2024-11-12T20:46:55.520392576Z" level=info msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" returns successfully" Nov 12 20:46:55.521727 containerd[1586]: time="2024-11-12T20:46:55.521691293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c7666879-4rdd7,Uid:78ca05ac-4333-451e-baf1-754b7ff398b3,Namespace:calico-system,Attempt:1,}" Nov 12 20:46:55.524289 systemd[1]: run-netns-cni\x2df0620635\x2dad35\x2d8ddd\x2de358\x2dbd6c8389e733.mount: Deactivated successfully. Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.394 [INFO][4347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.396 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" iface="eth0" netns="/var/run/netns/cni-632b2fbb-14ad-f852-f9d7-1985931dd4db" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.397 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" iface="eth0" netns="/var/run/netns/cni-632b2fbb-14ad-f852-f9d7-1985931dd4db" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" iface="eth0" netns="/var/run/netns/cni-632b2fbb-14ad-f852-f9d7-1985931dd4db" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.399 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.509 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.648 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.648 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.784 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:55.789191 containerd[1586]: 2024-11-12 20:46:55.787 [INFO][4347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:46:55.789808 containerd[1586]: time="2024-11-12T20:46:55.789376172Z" level=info msg="TearDown network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" successfully" Nov 12 20:46:55.789808 containerd[1586]: time="2024-11-12T20:46:55.789405087Z" level=info msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" returns successfully" Nov 12 20:46:55.789959 kubelet[2815]: E1112 20:46:55.789794 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:55.790893 containerd[1586]: time="2024-11-12T20:46:55.790630474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mwdx,Uid:02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3,Namespace:kube-system,Attempt:1,}" Nov 12 20:46:55.793576 systemd[1]: run-netns-cni\x2d632b2fbb\x2d14ad\x2df852\x2df9d7\x2d1985931dd4db.mount: Deactivated successfully. Nov 12 20:46:56.038679 kubelet[2815]: I1112 20:46:56.038593 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:46:56.039562 kubelet[2815]: E1112 20:46:56.039475 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.411 [INFO][4360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.411 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" iface="eth0" netns="/var/run/netns/cni-c86dfb75-ab7e-11e6-fc1f-3a367b64c274" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.412 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" iface="eth0" netns="/var/run/netns/cni-c86dfb75-ab7e-11e6-fc1f-3a367b64c274" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.412 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" iface="eth0" netns="/var/run/netns/cni-c86dfb75-ab7e-11e6-fc1f-3a367b64c274" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.412 [INFO][4360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.412 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.478 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.479 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.784 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.868 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:55.870 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:56.081 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:56.087248 containerd[1586]: 2024-11-12 20:46:56.084 [INFO][4360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:46:56.087759 containerd[1586]: time="2024-11-12T20:46:56.087410219Z" level=info msg="TearDown network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" successfully" Nov 12 20:46:56.087759 containerd[1586]: time="2024-11-12T20:46:56.087437361Z" level=info msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" returns successfully" Nov 12 20:46:56.088135 containerd[1586]: time="2024-11-12T20:46:56.088108937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-jqrc8,Uid:d3f327c6-ad19-4f3c-afcb-be3bdd722d7e,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:46:56.090226 systemd[1]: run-netns-cni\x2dc86dfb75\x2dab7e\x2d11e6\x2dfc1f\x2d3a367b64c274.mount: Deactivated successfully. 
Nov 12 20:46:56.446439 kubelet[2815]: E1112 20:46:56.446297 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:56.826747 systemd-networkd[1250]: cali80aa84aadee: Link UP Nov 12 20:46:56.828738 systemd-networkd[1250]: cali80aa84aadee: Gained carrier Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.697 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.709 [INFO][4451] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0 calico-apiserver-f884fd4f8- calico-apiserver 685f068e-e191-4943-9495-a2b63c195079 888 0 2024-11-12 20:46:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f884fd4f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f884fd4f8-sk7r7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali80aa84aadee [] []}} ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.709 [INFO][4451] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.753 [INFO][4514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" HandleID="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.767 [INFO][4514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" HandleID="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e1780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f884fd4f8-sk7r7", "timestamp":"2024-11-12 20:46:56.753345547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.767 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.767 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
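The Workload= identifiers threaded through these lines, e.g. localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0, follow a visible pattern: node, orchestrator, pod, and interface joined with single dashes, with any dash inside a field doubled so the separators stay unambiguous. A sketch of that encoding as inferred from the log (Calico's actual implementation lives in libcalico-go and may differ in detail):

    package main

    import (
        "fmt"
        "strings"
    )

    // workloadID joins the endpoint's fields with '-' after doubling any
    // dash that occurs inside a field, matching the Workload= strings above.
    func workloadID(node, orch, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return esc(node) + "-" + esc(orch) + "-" + esc(pod) + "-" + esc(iface)
    }

    func main() {
        fmt.Println(workloadID("localhost", "k8s", "calico-apiserver-f884fd4f8-sk7r7", "eth0"))
        // -> localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0
    }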
Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.767 [INFO][4514] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.770 [INFO][4514] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.775 [INFO][4514] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.782 [INFO][4514] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.787 [INFO][4514] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.789 [INFO][4514] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.789 [INFO][4514] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.790 [INFO][4514] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.796 [INFO][4514] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.802 [INFO][4514] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.802 [INFO][4514] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" host="localhost" Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.802 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
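With the host's affinity to block 192.168.88.128/26 confirmed under the host-wide lock, IPAM claims the first free address, 192.168.88.129; the block's own network address, .128, is never handed out. A simplified Go stand-in for that first-free scan, using only the CIDR and the result shown above:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // firstFree returns the first unallocated address in the block, skipping
    // the network address itself: a toy version of the claim the ipam lines
    // above perform while holding the host-wide lock.
    func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        addr, ok := firstFree(block, map[netip.Addr]bool{})
        fmt.Println(addr, ok) // 192.168.88.129 true
    }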
Nov 12 20:46:56.859552 containerd[1586]: 2024-11-12 20:46:56.802 [INFO][4514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" HandleID="k8s-pod-network.fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.813 [INFO][4451] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"685f068e-e191-4943-9495-a2b63c195079", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f884fd4f8-sk7r7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80aa84aadee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.814 [INFO][4451] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.814 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80aa84aadee ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.829 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.830 [INFO][4451] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" 
Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"685f068e-e191-4943-9495-a2b63c195079", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c", Pod:"calico-apiserver-f884fd4f8-sk7r7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80aa84aadee", MAC:"e6:bd:51:75:8a:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:56.860928 containerd[1586]: 2024-11-12 20:46:56.842 [INFO][4451] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-sk7r7" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:46:56.901348 systemd-networkd[1250]: califf34b2f8be5: Link UP Nov 12 20:46:56.903013 systemd-networkd[1250]: califf34b2f8be5: Gained carrier Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.740 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.757 [INFO][4464] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--z7dpt-eth0 coredns-76f75df574- kube-system 3b28f423-d59c-445f-922b-3b39379c4a87 890 0 2024-11-12 20:46:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-z7dpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf34b2f8be5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.757 [INFO][4464] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.830 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" HandleID="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.848 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" HandleID="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-z7dpt", "timestamp":"2024-11-12 20:46:56.830037139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.848 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.848 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.848 [INFO][4537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.854 [INFO][4537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.860 [INFO][4537] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.865 [INFO][4537] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.868 [INFO][4537] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.870 [INFO][4537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.870 [INFO][4537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.872 [INFO][4537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56 Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.883 [INFO][4537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" host="localhost" Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:56.938316 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" HandleID="k8s-pod-network.2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.895 [INFO][4464] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z7dpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b28f423-d59c-445f-922b-3b39379c4a87", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-z7dpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf34b2f8be5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.895 [INFO][4464] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.895 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf34b2f8be5 ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" 
Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.905 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.907 [INFO][4464] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z7dpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b28f423-d59c-445f-922b-3b39379c4a87", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56", Pod:"coredns-76f75df574-z7dpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf34b2f8be5", MAC:"2e:e9:8a:49:e1:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:56.939164 containerd[1586]: 2024-11-12 20:46:56.924 [INFO][4464] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56" Namespace="kube-system" Pod="coredns-76f75df574-z7dpt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:46:56.946114 containerd[1586]: time="2024-11-12T20:46:56.945962498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:56.947134 containerd[1586]: time="2024-11-12T20:46:56.946081535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:56.947134 containerd[1586]: time="2024-11-12T20:46:56.946701984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:56.948151 containerd[1586]: time="2024-11-12T20:46:56.947706016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:56.964448 systemd-networkd[1250]: calibfe313825c3: Link UP Nov 12 20:46:56.967333 systemd-networkd[1250]: calibfe313825c3: Gained carrier Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.761 [INFO][4470] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.776 [INFO][4470] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0 calico-kube-controllers-74c7666879- calico-system 78ca05ac-4333-451e-baf1-754b7ff398b3 889 0 2024-11-12 20:46:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c7666879 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74c7666879-4rdd7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibfe313825c3 [] []}} ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.777 [INFO][4470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.835 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" HandleID="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.858 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" HandleID="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74c7666879-4rdd7", "timestamp":"2024-11-12 20:46:56.835712164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.858 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
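[Editor's note] Each systemd-networkd "Link UP ... Gained carrier" pair in this stretch (cali80aa84aadee, califf34b2f8be5, and now calibfe313825c3) is the host side of a pod veth the CNI plugin just created. To list which Calico host-side veths are currently up on a node, a stdlib-only Go check like the following would do; the only assumption is the "cali" name prefix, which matches the interfaces logged here:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Calico names host-side veths with a "cali" prefix derived from
		// the workload endpoint, as seen in the log above.
		if strings.HasPrefix(ifc.Name, "cali") && ifc.Flags&net.FlagUp != 0 {
			fmt.Printf("%s is up (MTU %d)\n", ifc.Name, ifc.MTU)
		}
	}
}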
Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.890 [INFO][4542] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.893 [INFO][4542] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.912 [INFO][4542] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.926 [INFO][4542] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.930 [INFO][4542] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.932 [INFO][4542] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.932 [INFO][4542] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.937 [INFO][4542] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.942 [INFO][4542] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.948 [INFO][4542] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.948 [INFO][4542] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" host="localhost" Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.948 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
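[Editor's note] All the pods in this excerpt draw from the same affine block, 192.168.88.128/26, and the addresses come out sequentially (.129, .130, .131, ...). A /26 leaves 32 − 26 = 6 host bits, i.e. 64 addresses spanning 192.168.88.128 through 192.168.88.191, so this node has plenty of headroom; a quick net/netip check:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 2^6 = 64 addresses in the block
	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	// Prints: 192.168.88.128/26 holds 64 addresses: 192.168.88.128-192.168.88.191
	fmt.Printf("%s holds %d addresses: %s-%s\n", p, size, p.Addr(), last)
}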
Nov 12 20:46:57.005440 containerd[1586]: 2024-11-12 20:46:56.948 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" HandleID="k8s-pod-network.08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.954 [INFO][4470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0", GenerateName:"calico-kube-controllers-74c7666879-", Namespace:"calico-system", SelfLink:"", UID:"78ca05ac-4333-451e-baf1-754b7ff398b3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c7666879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74c7666879-4rdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe313825c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.956 [INFO][4470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.956 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfe313825c3 ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.961 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.962 [INFO][4470] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0", GenerateName:"calico-kube-controllers-74c7666879-", Namespace:"calico-system", SelfLink:"", UID:"78ca05ac-4333-451e-baf1-754b7ff398b3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c7666879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda", Pod:"calico-kube-controllers-74c7666879-4rdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe313825c3", MAC:"de:49:0e:66:f6:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.007403 containerd[1586]: 2024-11-12 20:46:56.976 [INFO][4470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda" Namespace="calico-system" Pod="calico-kube-controllers-74c7666879-4rdd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:46:57.037117 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:57.057612 systemd-networkd[1250]: calie5e6bfeea99: Link UP Nov 12 20:46:57.058941 systemd-networkd[1250]: calie5e6bfeea99: Gained carrier Nov 12 20:46:57.073506 containerd[1586]: time="2024-11-12T20:46:57.073340371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:57.081299 containerd[1586]: time="2024-11-12T20:46:57.079590471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:57.081299 containerd[1586]: time="2024-11-12T20:46:57.079641850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.081299 containerd[1586]: time="2024-11-12T20:46:57.079869946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.749 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.762 [INFO][4502] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0 calico-apiserver-f884fd4f8- calico-apiserver d3f327c6-ad19-4f3c-afcb-be3bdd722d7e 886 0 2024-11-12 20:46:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f884fd4f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f884fd4f8-jqrc8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5e6bfeea99 [] []}} ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.763 [INFO][4502] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.838 [INFO][4531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" HandleID="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.857 [INFO][4531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" HandleID="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f884fd4f8-jqrc8", "timestamp":"2024-11-12 20:46:56.838569495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.858 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.949 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
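[Editor's note] Worth noticing across these interleaved traces: several CNI ADDs are racing ([4514], [4537], [4542], [4531], and one more below), yet the "Acquired host-wide IPAM lock" lines never overlap — each request acquires the lock only after the previous one logs its release (56.802→56.848, 56.890→56.890, 56.948→56.949). That handoff has the same shape as a mutex-guarded critical section; the toy sketch below models it with goroutines standing in for concurrent ADD requests (not Calico's actual lock implementation, and unlike the real log, goroutine scheduling makes the output order vary run to run):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var hostWide sync.Mutex // toy stand-in for the host-wide IPAM lock
	var wg sync.WaitGroup
	next := 129 // next free host octet in 192.168.88.128/26, per the log

	for _, req := range []string{"[4514]", "[4537]", "[4542]", "[4531]", "[4552]"} {
		wg.Add(1)
		go func(req string) {
			defer wg.Done()
			hostWide.Lock() // "Acquired host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.88.%d/26", next)
			next++
			hostWide.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(req, "assigned", ip)
		}(req)
	}
	wg.Wait()
}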
Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.949 [INFO][4531] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.953 [INFO][4531] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.964 [INFO][4531] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:56.993 [INFO][4531] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.003 [INFO][4531] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.008 [INFO][4531] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.009 [INFO][4531] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.015 [INFO][4531] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330 Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.023 [INFO][4531] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.035 [INFO][4531] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.035 [INFO][4531] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" host="localhost" Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.036 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
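[Editor's note] The endpoint writes in this section follow a consistent two-phase pattern: k8s.go 386 first populates the WorkloadEndpoint with the assigned IPNetworks and InterfaceName but an empty ContainerID and MAC, and only once the veth exists does k8s.go 414 fill in the MAC and active container ID before k8s.go 500 persists the result. A reduced model of that update, trimmed to the fields that change and using the sk7r7 values logged above:

package main

import "fmt"

// workloadEndpoint keeps only the fields the two log phases touch.
type workloadEndpoint struct {
	InterfaceName string
	IPNetworks    []string
	ContainerID   string // empty in phase one ("Populated endpoint")
	MAC           string // empty in phase one
}

func main() {
	// Phase one: after IPAM, before the veth is wired up (k8s.go 386).
	ep := workloadEndpoint{
		InterfaceName: "cali80aa84aadee",
		IPNetworks:    []string{"192.168.88.129/32"},
	}
	fmt.Printf("populated: %+v\n", ep)

	// Phase two: dataplane set up, so MAC and container ID are known
	// (k8s.go 414), then the endpoint is written back (k8s.go 500).
	ep.MAC = "e6:bd:51:75:8a:65"
	ep.ContainerID = "fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c"
	fmt.Printf("updated:   %+v\n", ep)
}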
Nov 12 20:46:57.082099 containerd[1586]: 2024-11-12 20:46:57.036 [INFO][4531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" HandleID="k8s-pod-network.18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.050 [INFO][4502] cni-plugin/k8s.go 386: Populated endpoint ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f884fd4f8-jqrc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e6bfeea99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.051 [INFO][4502] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.051 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5e6bfeea99 ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.058 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.061 [INFO][4502] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" 
Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330", Pod:"calico-apiserver-f884fd4f8-jqrc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e6bfeea99", MAC:"ae:b3:71:a0:9a:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.082739 containerd[1586]: 2024-11-12 20:46:57.076 [INFO][4502] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330" Namespace="calico-apiserver" Pod="calico-apiserver-f884fd4f8-jqrc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:46:57.092164 containerd[1586]: time="2024-11-12T20:46:57.091852035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:57.093306 containerd[1586]: time="2024-11-12T20:46:57.092438627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:57.093306 containerd[1586]: time="2024-11-12T20:46:57.092557695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.093306 containerd[1586]: time="2024-11-12T20:46:57.092719264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.105958 systemd-networkd[1250]: cali8963253065c: Link UP Nov 12 20:46:57.109801 systemd-networkd[1250]: cali8963253065c: Gained carrier Nov 12 20:46:57.127477 containerd[1586]: time="2024-11-12T20:46:57.127396052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-sk7r7,Uid:685f068e-e191-4943-9495-a2b63c195079,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c\"" Nov 12 20:46:57.130164 containerd[1586]: time="2024-11-12T20:46:57.130116127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:46:57.147781 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:57.148151 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:57.169867 containerd[1586]: time="2024-11-12T20:46:57.169778234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:57.170824 containerd[1586]: time="2024-11-12T20:46:57.170703525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:57.170978 containerd[1586]: time="2024-11-12T20:46:57.170794839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.171247 containerd[1586]: time="2024-11-12T20:46:57.171189936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.195632 containerd[1586]: time="2024-11-12T20:46:57.195582973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z7dpt,Uid:3b28f423-d59c-445f-922b-3b39379c4a87,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56\"" Nov 12 20:46:57.196567 kubelet[2815]: E1112 20:46:57.196232 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:57.196981 containerd[1586]: time="2024-11-12T20:46:57.196911836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c7666879-4rdd7,Uid:78ca05ac-4333-451e-baf1-754b7ff398b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda\"" Nov 12 20:46:57.198652 containerd[1586]: time="2024-11-12T20:46:57.198611940Z" level=info msg="CreateContainer within sandbox \"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:46:57.203200 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:57.232659 containerd[1586]: time="2024-11-12T20:46:57.232601632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f884fd4f8-jqrc8,Uid:d3f327c6-ad19-4f3c-afcb-be3bdd722d7e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330\"" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.775 [INFO][4489] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.789 [INFO][4489] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--5mwdx-eth0 coredns-76f75df574- kube-system 02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3 887 0 2024-11-12 20:46:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-5mwdx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8963253065c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.789 [INFO][4489] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.865 [INFO][4552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" HandleID="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.878 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" HandleID="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035db80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-5mwdx", "timestamp":"2024-11-12 20:46:56.865501058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:56.879 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.036 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
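[Editor's note] The containerd entries interleaved through this stretch ("RunPodSandbox ... returns sandbox id", "PullImage", "CreateContainer within sandbox", and the StartContainer lines further below) follow the Kubernetes CRI ordering: a sandbox is created and networked first, then containers are created inside it and started. The toy types below only mirror that call order — they are not the real CRI interface:

package main

import "fmt"

type runtime struct{ nextID int }

// RunPodSandbox returns a sandbox ID, as in the log's
// "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id".
func (r *runtime) RunPodSandbox(pod string) string {
	r.nextID++
	return fmt.Sprintf("sandbox-%d(%s)", r.nextID, pod)
}

// CreateContainer returns a container ID scoped to the sandbox, as in
// "CreateContainer within sandbox ... returns container id".
func (r *runtime) CreateContainer(sandboxID, name string) string {
	r.nextID++
	return fmt.Sprintf("container-%d(%s in %s)", r.nextID, name, sandboxID)
}

func (r *runtime) StartContainer(containerID string) {
	fmt.Println("started", containerID) // "StartContainer ... returns successfully"
}

func main() {
	var rt runtime
	sb := rt.RunPodSandbox("coredns-76f75df574-z7dpt")
	c := rt.CreateContainer(sb, "coredns")
	rt.StartContainer(c)
}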
Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.036 [INFO][4552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.040 [INFO][4552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.048 [INFO][4552] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.055 [INFO][4552] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.061 [INFO][4552] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.070 [INFO][4552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.070 [INFO][4552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.076 [INFO][4552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.083 [INFO][4552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.092 [INFO][4552] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.092 [INFO][4552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" host="localhost" Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.092 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
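[Editor's note] In the endpoint dumps (the z7dpt endpoint above and the 5mwdx endpoint just below), the coredns Ports are printed by Go's verbose struct formatting with hexadecimal values: Port:0x35 is decimal 53 (DNS over UDP and TCP) and Port:0x23c1 is 9153, coredns's default Prometheus metrics port; Protocol{Type:1, NumVal:0x0, StrVal:"UDP"} marks the string form of the protocol rather than a numeric one. A two-line check of the arithmetic:

package main

import "fmt"

func main() {
	// The endpoint dumps print ports in hex; these are the decimal values.
	fmt.Println(0x35, 0x23c1) // prints: 53 9153
}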
Nov 12 20:46:57.295919 containerd[1586]: 2024-11-12 20:46:57.092 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" HandleID="k8s-pod-network.0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.099 [INFO][4489] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5mwdx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-5mwdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8963253065c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.099 [INFO][4489] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.099 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8963253065c ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.111 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.112 
[INFO][4489] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5mwdx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de", Pod:"coredns-76f75df574-5mwdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8963253065c", MAC:"aa:f7:01:92:f0:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.296678 containerd[1586]: 2024-11-12 20:46:57.291 [INFO][4489] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de" Namespace="kube-system" Pod="coredns-76f75df574-5mwdx" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:46:57.300908 containerd[1586]: time="2024-11-12T20:46:57.300477869Z" level=info msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" Nov 12 20:46:57.402049 containerd[1586]: time="2024-11-12T20:46:57.401708243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:57.402049 containerd[1586]: time="2024-11-12T20:46:57.401798326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:57.405141 containerd[1586]: time="2024-11-12T20:46:57.402628644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.405141 containerd[1586]: time="2024-11-12T20:46:57.402846371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.409784 containerd[1586]: time="2024-11-12T20:46:57.409477791Z" level=info msg="CreateContainer within sandbox \"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7db262f70c1b4935537ff23b54e79ac834cfff4d99a9e0fc38918489acc0a975\"" Nov 12 20:46:57.411016 containerd[1586]: time="2024-11-12T20:46:57.410545244Z" level=info msg="StartContainer for \"7db262f70c1b4935537ff23b54e79ac834cfff4d99a9e0fc38918489acc0a975\"" Nov 12 20:46:57.461723 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.397 [INFO][4840] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.397 [INFO][4840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" iface="eth0" netns="/var/run/netns/cni-361e9a26-54b3-702f-d2b6-897c894813a4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.399 [INFO][4840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" iface="eth0" netns="/var/run/netns/cni-361e9a26-54b3-702f-d2b6-897c894813a4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.399 [INFO][4840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" iface="eth0" netns="/var/run/netns/cni-361e9a26-54b3-702f-d2b6-897c894813a4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.399 [INFO][4840] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.399 [INFO][4840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.442 [INFO][4867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.442 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.442 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.455 [WARNING][4867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.456 [INFO][4867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.462 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:57.477960 containerd[1586]: 2024-11-12 20:46:57.470 [INFO][4840] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:46:57.478778 containerd[1586]: time="2024-11-12T20:46:57.478743507Z" level=info msg="TearDown network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" successfully" Nov 12 20:46:57.478878 containerd[1586]: time="2024-11-12T20:46:57.478860853Z" level=info msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" returns successfully" Nov 12 20:46:57.480254 containerd[1586]: time="2024-11-12T20:46:57.480193222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pn8fl,Uid:71ac2d0f-163c-4690-9604-80f6d13fee6e,Namespace:calico-system,Attempt:1,}" Nov 12 20:46:57.507563 containerd[1586]: time="2024-11-12T20:46:57.507488193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mwdx,Uid:02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de\"" Nov 12 20:46:57.508888 kubelet[2815]: E1112 20:46:57.508739 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:57.516788 containerd[1586]: time="2024-11-12T20:46:57.516585131Z" level=info msg="CreateContainer within sandbox \"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:46:57.538908 containerd[1586]: time="2024-11-12T20:46:57.538837613Z" level=info msg="StartContainer for \"7db262f70c1b4935537ff23b54e79ac834cfff4d99a9e0fc38918489acc0a975\" returns successfully" Nov 12 20:46:57.570545 kernel: bpftool[4962]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:46:57.580920 containerd[1586]: time="2024-11-12T20:46:57.580822476Z" level=info msg="CreateContainer within sandbox \"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5006c394248d67af96dd955da2c4384fa3a65766f9b80d95df8581cc444fbba\"" Nov 12 20:46:57.582486 containerd[1586]: time="2024-11-12T20:46:57.582409262Z" level=info msg="StartContainer for \"f5006c394248d67af96dd955da2c4384fa3a65766f9b80d95df8581cc444fbba\"" Nov 12 20:46:57.687928 containerd[1586]: time="2024-11-12T20:46:57.687753578Z" level=info msg="StartContainer for \"f5006c394248d67af96dd955da2c4384fa3a65766f9b80d95df8581cc444fbba\" returns successfully" Nov 12 20:46:57.691176 systemd[1]: run-netns-cni\x2d361e9a26\x2d54b3\x2d702f\x2dd2b6\x2d897c894813a4.mount: 
Deactivated successfully. Nov 12 20:46:57.710015 systemd-networkd[1250]: calidba0e0f2565: Link UP Nov 12 20:46:57.710390 systemd-networkd[1250]: calidba0e0f2565: Gained carrier Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.565 [INFO][4937] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pn8fl-eth0 csi-node-driver- calico-system 71ac2d0f-163c-4690-9604-80f6d13fee6e 929 0 2024-11-12 20:46:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pn8fl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidba0e0f2565 [] []}} ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.565 [INFO][4937] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.617 [INFO][4964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" HandleID="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.635 [INFO][4964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" HandleID="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e3cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pn8fl", "timestamp":"2024-11-12 20:46:57.617024782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.635 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.636 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.636 [INFO][4964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.642 [INFO][4964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.648 [INFO][4964] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.654 [INFO][4964] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.656 [INFO][4964] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.659 [INFO][4964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.659 [INFO][4964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.661 [INFO][4964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54 Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.669 [INFO][4964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.681 [INFO][4964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.681 [INFO][4964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" host="localhost" Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.681 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:46:57.742060 containerd[1586]: 2024-11-12 20:46:57.682 [INFO][4964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" HandleID="k8s-pod-network.09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.702 [INFO][4937] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pn8fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71ac2d0f-163c-4690-9604-80f6d13fee6e", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pn8fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba0e0f2565", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.702 [INFO][4937] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.702 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidba0e0f2565 ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.712 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.714 [INFO][4937] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pn8fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71ac2d0f-163c-4690-9604-80f6d13fee6e", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54", Pod:"csi-node-driver-pn8fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba0e0f2565", MAC:"3a:84:62:1f:30:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:57.742730 containerd[1586]: 2024-11-12 20:46:57.735 [INFO][4937] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54" Namespace="calico-system" Pod="csi-node-driver-pn8fl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:46:57.901135 systemd-networkd[1250]: vxlan.calico: Link UP Nov 12 20:46:57.901149 systemd-networkd[1250]: vxlan.calico: Gained carrier Nov 12 20:46:57.949602 containerd[1586]: time="2024-11-12T20:46:57.948801358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:46:57.949602 containerd[1586]: time="2024-11-12T20:46:57.948878856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:46:57.949602 containerd[1586]: time="2024-11-12T20:46:57.948904414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.949602 containerd[1586]: time="2024-11-12T20:46:57.949055744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:46:57.998866 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:58.020508 containerd[1586]: time="2024-11-12T20:46:58.019735027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pn8fl,Uid:71ac2d0f-163c-4690-9604-80f6d13fee6e,Namespace:calico-system,Attempt:1,} returns sandbox id \"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54\"" Nov 12 20:46:58.061905 systemd-networkd[1250]: califf34b2f8be5: Gained IPv6LL Nov 12 20:46:58.501808 kubelet[2815]: E1112 20:46:58.500378 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:58.503225 kubelet[2815]: E1112 20:46:58.502129 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:58.509630 systemd-networkd[1250]: cali8963253065c: Gained IPv6LL Nov 12 20:46:58.530630 kubelet[2815]: I1112 20:46:58.530534 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z7dpt" podStartSLOduration=34.530422296 podStartE2EDuration="34.530422296s" podCreationTimestamp="2024-11-12 20:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:58.513232399 +0000 UTC m=+46.321101580" watchObservedRunningTime="2024-11-12 20:46:58.530422296 +0000 UTC m=+46.338291477" Nov 12 20:46:58.531508 kubelet[2815]: I1112 20:46:58.531159 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5mwdx" podStartSLOduration=34.531133696 podStartE2EDuration="34.531133696s" podCreationTimestamp="2024-11-12 20:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:58.529826898 +0000 UTC m=+46.337696079" watchObservedRunningTime="2024-11-12 20:46:58.531133696 +0000 UTC m=+46.339002877" Nov 12 20:46:58.701710 systemd-networkd[1250]: calibfe313825c3: Gained IPv6LL Nov 12 20:46:58.764873 systemd-networkd[1250]: calidba0e0f2565: Gained IPv6LL Nov 12 20:46:58.829871 systemd-networkd[1250]: cali80aa84aadee: Gained IPv6LL Nov 12 20:46:58.956692 systemd-networkd[1250]: calie5e6bfeea99: Gained IPv6LL Nov 12 20:46:59.063899 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:59394.service - OpenSSH per-connection server daemon (10.0.0.1:59394). Nov 12 20:46:59.112175 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 59394 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:59.115690 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:59.121148 systemd-logind[1564]: New session 11 of user core. Nov 12 20:46:59.127979 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:46:59.212846 systemd-networkd[1250]: vxlan.calico: Gained IPv6LL Nov 12 20:46:59.281939 sshd[5156]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:59.287852 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:59406.service - OpenSSH per-connection server daemon (10.0.0.1:59406). 
Nov 12 20:46:59.288433 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:59394.service: Deactivated successfully. Nov 12 20:46:59.296112 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:46:59.296319 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:46:59.299847 systemd-logind[1564]: Removed session 11. Nov 12 20:46:59.329139 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 59406 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:59.331176 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:59.337087 systemd-logind[1564]: New session 12 of user core. Nov 12 20:46:59.344200 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:46:59.507573 kubelet[2815]: E1112 20:46:59.507526 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:59.508438 kubelet[2815]: E1112 20:46:59.507649 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:59.544028 sshd[5175]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:59.556006 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:59418.service - OpenSSH per-connection server daemon (10.0.0.1:59418). Nov 12 20:46:59.562355 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:59406.service: Deactivated successfully. Nov 12 20:46:59.577590 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:46:59.579672 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:46:59.581875 systemd-logind[1564]: Removed session 12. Nov 12 20:46:59.618962 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 59418 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:59.621366 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:59.627892 systemd-logind[1564]: New session 13 of user core. Nov 12 20:46:59.634969 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:47:00.059744 sshd[5189]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:00.065372 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:59418.service: Deactivated successfully. Nov 12 20:47:00.068738 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:47:00.069559 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:47:00.071422 systemd-logind[1564]: Removed session 13. 
Nov 12 20:47:00.108369 containerd[1586]: time="2024-11-12T20:47:00.108303645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:00.109216 containerd[1586]: time="2024-11-12T20:47:00.108877140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:47:00.110288 containerd[1586]: time="2024-11-12T20:47:00.110233643Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:00.123399 containerd[1586]: time="2024-11-12T20:47:00.123325739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:00.124564 containerd[1586]: time="2024-11-12T20:47:00.124496336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.994324432s" Nov 12 20:47:00.124564 containerd[1586]: time="2024-11-12T20:47:00.124554868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:47:00.125638 containerd[1586]: time="2024-11-12T20:47:00.125235518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:47:00.127293 containerd[1586]: time="2024-11-12T20:47:00.127227555Z" level=info msg="CreateContainer within sandbox \"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:47:00.141089 containerd[1586]: time="2024-11-12T20:47:00.141015751Z" level=info msg="CreateContainer within sandbox \"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"670912f780d6cd85117555b9c3204e0368e7743a344f619d0db87fb1e0923639\"" Nov 12 20:47:00.142228 containerd[1586]: time="2024-11-12T20:47:00.142162021Z" level=info msg="StartContainer for \"670912f780d6cd85117555b9c3204e0368e7743a344f619d0db87fb1e0923639\"" Nov 12 20:47:00.224985 containerd[1586]: time="2024-11-12T20:47:00.224933222Z" level=info msg="StartContainer for \"670912f780d6cd85117555b9c3204e0368e7743a344f619d0db87fb1e0923639\" returns successfully" Nov 12 20:47:00.510639 kubelet[2815]: E1112 20:47:00.510604 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:00.511689 kubelet[2815]: E1112 20:47:00.511664 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:00.521295 kubelet[2815]: I1112 20:47:00.521245 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f884fd4f8-sk7r7" podStartSLOduration=27.525270117 podStartE2EDuration="30.521192596s" 
podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:57.128993749 +0000 UTC m=+44.936862930" lastFinishedPulling="2024-11-12 20:47:00.124916228 +0000 UTC m=+47.932785409" observedRunningTime="2024-11-12 20:47:00.520536894 +0000 UTC m=+48.328406075" watchObservedRunningTime="2024-11-12 20:47:00.521192596 +0000 UTC m=+48.329061797" Nov 12 20:47:01.231654 kubelet[2815]: E1112 20:47:01.231595 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:01.512366 kubelet[2815]: I1112 20:47:01.512194 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:47:02.390327 containerd[1586]: time="2024-11-12T20:47:02.390262456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:02.391403 containerd[1586]: time="2024-11-12T20:47:02.391192501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:47:02.392818 containerd[1586]: time="2024-11-12T20:47:02.392712894Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:02.395079 containerd[1586]: time="2024-11-12T20:47:02.395010060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:02.395719 containerd[1586]: time="2024-11-12T20:47:02.395690388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.270418772s" Nov 12 20:47:02.395719 containerd[1586]: time="2024-11-12T20:47:02.395721057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:47:02.396285 containerd[1586]: time="2024-11-12T20:47:02.396254956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:47:02.412651 containerd[1586]: time="2024-11-12T20:47:02.412493858Z" level=info msg="CreateContainer within sandbox \"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:47:02.428062 containerd[1586]: time="2024-11-12T20:47:02.427993287Z" level=info msg="CreateContainer within sandbox \"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f7701b42373b5b7f339bf2c775f4d060c455b6723bac00dd36b549daa8fb2086\"" Nov 12 20:47:02.428774 containerd[1586]: time="2024-11-12T20:47:02.428744211Z" level=info msg="StartContainer for \"f7701b42373b5b7f339bf2c775f4d060c455b6723bac00dd36b549daa8fb2086\"" Nov 12 20:47:02.580843 containerd[1586]: time="2024-11-12T20:47:02.580786715Z" level=info msg="StartContainer for 
\"f7701b42373b5b7f339bf2c775f4d060c455b6723bac00dd36b549daa8fb2086\" returns successfully" Nov 12 20:47:02.738661 kubelet[2815]: I1112 20:47:02.736524 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74c7666879-4rdd7" podStartSLOduration=27.539075589 podStartE2EDuration="32.73648045s" podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:57.198638962 +0000 UTC m=+45.006508143" lastFinishedPulling="2024-11-12 20:47:02.396043823 +0000 UTC m=+50.203913004" observedRunningTime="2024-11-12 20:47:02.736109642 +0000 UTC m=+50.543978823" watchObservedRunningTime="2024-11-12 20:47:02.73648045 +0000 UTC m=+50.544349631" Nov 12 20:47:03.040565 containerd[1586]: time="2024-11-12T20:47:03.040383686Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:03.041787 containerd[1586]: time="2024-11-12T20:47:03.041736017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:47:03.043850 containerd[1586]: time="2024-11-12T20:47:03.043815235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 647.529961ms" Nov 12 20:47:03.043850 containerd[1586]: time="2024-11-12T20:47:03.043849471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:47:03.044754 containerd[1586]: time="2024-11-12T20:47:03.044440829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:47:03.045958 containerd[1586]: time="2024-11-12T20:47:03.045928247Z" level=info msg="CreateContainer within sandbox \"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:47:03.060701 containerd[1586]: time="2024-11-12T20:47:03.060606106Z" level=info msg="CreateContainer within sandbox \"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"92da993aef91d65f1fed1b646c2ab3b354afe957015c897233e6e753c6cccae1\"" Nov 12 20:47:03.061351 containerd[1586]: time="2024-11-12T20:47:03.061269883Z" level=info msg="StartContainer for \"92da993aef91d65f1fed1b646c2ab3b354afe957015c897233e6e753c6cccae1\"" Nov 12 20:47:03.277217 containerd[1586]: time="2024-11-12T20:47:03.276099438Z" level=info msg="StartContainer for \"92da993aef91d65f1fed1b646c2ab3b354afe957015c897233e6e753c6cccae1\" returns successfully" Nov 12 20:47:03.601874 kubelet[2815]: I1112 20:47:03.601283 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f884fd4f8-jqrc8" podStartSLOduration=27.791164163 podStartE2EDuration="33.601215691s" podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:57.234146889 +0000 UTC m=+45.042016070" lastFinishedPulling="2024-11-12 20:47:03.044198417 +0000 UTC m=+50.852067598" observedRunningTime="2024-11-12 20:47:03.600647777 +0000 UTC m=+51.408516948" 
watchObservedRunningTime="2024-11-12 20:47:03.601215691 +0000 UTC m=+51.409084872" Nov 12 20:47:04.426751 containerd[1586]: time="2024-11-12T20:47:04.426688957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:04.427527 containerd[1586]: time="2024-11-12T20:47:04.427491578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:47:04.428706 containerd[1586]: time="2024-11-12T20:47:04.428677440Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:04.431201 containerd[1586]: time="2024-11-12T20:47:04.431136941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:04.431975 containerd[1586]: time="2024-11-12T20:47:04.431928692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.387433468s" Nov 12 20:47:04.432049 containerd[1586]: time="2024-11-12T20:47:04.431972916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:47:04.433897 containerd[1586]: time="2024-11-12T20:47:04.433860236Z" level=info msg="CreateContainer within sandbox \"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:47:04.471832 containerd[1586]: time="2024-11-12T20:47:04.471760801Z" level=info msg="CreateContainer within sandbox \"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b39acbcd89aecf131aeda17254509b5cac1966f7ef00a69b54be055d8cf78267\"" Nov 12 20:47:04.472513 containerd[1586]: time="2024-11-12T20:47:04.472420078Z" level=info msg="StartContainer for \"b39acbcd89aecf131aeda17254509b5cac1966f7ef00a69b54be055d8cf78267\"" Nov 12 20:47:04.544501 containerd[1586]: time="2024-11-12T20:47:04.544436469Z" level=info msg="StartContainer for \"b39acbcd89aecf131aeda17254509b5cac1966f7ef00a69b54be055d8cf78267\" returns successfully" Nov 12 20:47:04.546103 containerd[1586]: time="2024-11-12T20:47:04.546046771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:47:05.072792 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426). Nov 12 20:47:05.115945 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:05.117826 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:05.125258 systemd-logind[1564]: New session 14 of user core. Nov 12 20:47:05.135841 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 12 20:47:05.274056 sshd[5424]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:05.278948 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:59426.service: Deactivated successfully. Nov 12 20:47:05.282204 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:47:05.283028 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:47:05.284118 systemd-logind[1564]: Removed session 14. Nov 12 20:47:06.561073 containerd[1586]: time="2024-11-12T20:47:06.560974137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:06.567311 containerd[1586]: time="2024-11-12T20:47:06.567203269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:47:06.568714 containerd[1586]: time="2024-11-12T20:47:06.568629477Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:06.571521 containerd[1586]: time="2024-11-12T20:47:06.571430756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:06.574469 containerd[1586]: time="2024-11-12T20:47:06.572372882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.02627771s" Nov 12 20:47:06.574469 containerd[1586]: time="2024-11-12T20:47:06.572423820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:47:06.577397 containerd[1586]: time="2024-11-12T20:47:06.577340761Z" level=info msg="CreateContainer within sandbox \"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:47:06.608974 containerd[1586]: time="2024-11-12T20:47:06.608916424Z" level=info msg="CreateContainer within sandbox \"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fd7b924133abf2a335cd4030729988eb6a72770ad33593eab7d7e33b5dda6456\"" Nov 12 20:47:06.609592 containerd[1586]: time="2024-11-12T20:47:06.609565621Z" level=info msg="StartContainer for \"fd7b924133abf2a335cd4030729988eb6a72770ad33593eab7d7e33b5dda6456\"" Nov 12 20:47:06.648162 systemd[1]: run-containerd-runc-k8s.io-fd7b924133abf2a335cd4030729988eb6a72770ad33593eab7d7e33b5dda6456-runc.minga3.mount: Deactivated successfully. 
Nov 12 20:47:06.686927 containerd[1586]: time="2024-11-12T20:47:06.686877185Z" level=info msg="StartContainer for \"fd7b924133abf2a335cd4030729988eb6a72770ad33593eab7d7e33b5dda6456\" returns successfully" Nov 12 20:47:07.387984 kubelet[2815]: I1112 20:47:07.387937 2815 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:47:07.389669 kubelet[2815]: I1112 20:47:07.389648 2815 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:47:07.769633 kubelet[2815]: I1112 20:47:07.768962 2815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-pn8fl" podStartSLOduration=29.217666564 podStartE2EDuration="37.768905465s" podCreationTimestamp="2024-11-12 20:46:30 +0000 UTC" firstStartedPulling="2024-11-12 20:46:58.022433849 +0000 UTC m=+45.830303030" lastFinishedPulling="2024-11-12 20:47:06.57367275 +0000 UTC m=+54.381541931" observedRunningTime="2024-11-12 20:47:07.768760509 +0000 UTC m=+55.576629690" watchObservedRunningTime="2024-11-12 20:47:07.768905465 +0000 UTC m=+55.576774646" Nov 12 20:47:10.304531 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:60698.service - OpenSSH per-connection server daemon (10.0.0.1:60698). Nov 12 20:47:10.385347 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 60698 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:10.389027 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:10.399944 systemd-logind[1564]: New session 15 of user core. Nov 12 20:47:10.408825 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:47:10.761053 sshd[5493]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:10.773051 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:60698.service: Deactivated successfully. Nov 12 20:47:10.783393 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:47:10.784753 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:47:10.786682 systemd-logind[1564]: Removed session 15. Nov 12 20:47:12.294411 containerd[1586]: time="2024-11-12T20:47:12.294311679Z" level=info msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.344 [WARNING][5523] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z7dpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b28f423-d59c-445f-922b-3b39379c4a87", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56", Pod:"coredns-76f75df574-z7dpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf34b2f8be5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.344 [INFO][5523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.344 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" iface="eth0" netns="" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.344 [INFO][5523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.344 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.374 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.374 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.374 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.382 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.382 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.383 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.390582 containerd[1586]: 2024-11-12 20:47:12.386 [INFO][5523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.391142 containerd[1586]: time="2024-11-12T20:47:12.390644273Z" level=info msg="TearDown network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" successfully" Nov 12 20:47:12.391142 containerd[1586]: time="2024-11-12T20:47:12.390681984Z" level=info msg="StopPodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" returns successfully" Nov 12 20:47:12.398623 containerd[1586]: time="2024-11-12T20:47:12.398570007Z" level=info msg="RemovePodSandbox for \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" Nov 12 20:47:12.400902 containerd[1586]: time="2024-11-12T20:47:12.400855214Z" level=info msg="Forcibly stopping sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\"" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.443 [WARNING][5554] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z7dpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3b28f423-d59c-445f-922b-3b39379c4a87", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecece7b6a1cebbd05c9c5ad607cecf3711c29c1c73a28840b32ca40bf82ad56", Pod:"coredns-76f75df574-z7dpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf34b2f8be5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.443 [INFO][5554] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.443 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" iface="eth0" netns="" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.443 [INFO][5554] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.443 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.471 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.471 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.471 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.477 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.477 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" HandleID="k8s-pod-network.870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Workload="localhost-k8s-coredns--76f75df574--z7dpt-eth0" Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.479 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.485050 containerd[1586]: 2024-11-12 20:47:12.482 [INFO][5554] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e" Nov 12 20:47:12.485597 containerd[1586]: time="2024-11-12T20:47:12.485103352Z" level=info msg="TearDown network for sandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" successfully" Nov 12 20:47:12.504636 containerd[1586]: time="2024-11-12T20:47:12.504560299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:12.504805 containerd[1586]: time="2024-11-12T20:47:12.504674126Z" level=info msg="RemovePodSandbox \"870f924f67e1723da5308adfd8ebe5a4f20f13f49a85cacab275dad7f5d4000e\" returns successfully" Nov 12 20:47:12.505550 containerd[1586]: time="2024-11-12T20:47:12.505501700Z" level=info msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.544 [WARNING][5584] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pn8fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71ac2d0f-163c-4690-9604-80f6d13fee6e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54", Pod:"csi-node-driver-pn8fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba0e0f2565", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.545 [INFO][5584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.545 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" iface="eth0" netns="" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.545 [INFO][5584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.545 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.578 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.578 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.578 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.585 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.585 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.587 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.593622 containerd[1586]: 2024-11-12 20:47:12.589 [INFO][5584] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.593622 containerd[1586]: time="2024-11-12T20:47:12.592638222Z" level=info msg="TearDown network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" successfully" Nov 12 20:47:12.593622 containerd[1586]: time="2024-11-12T20:47:12.592667308Z" level=info msg="StopPodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" returns successfully" Nov 12 20:47:12.593622 containerd[1586]: time="2024-11-12T20:47:12.593270345Z" level=info msg="RemovePodSandbox for \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" Nov 12 20:47:12.593622 containerd[1586]: time="2024-11-12T20:47:12.593305241Z" level=info msg="Forcibly stopping sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\"" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.633 [WARNING][5616] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pn8fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71ac2d0f-163c-4690-9604-80f6d13fee6e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09e034ad27200513437a654ef81e0e8d67c11d60eb7094d0526b21a39b6ebb54", Pod:"csi-node-driver-pn8fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba0e0f2565", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.634 [INFO][5616] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.634 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" iface="eth0" netns="" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.634 [INFO][5616] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.634 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.660 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.660 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.660 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.666 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.666 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" HandleID="k8s-pod-network.f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Workload="localhost-k8s-csi--node--driver--pn8fl-eth0" Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.668 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.673648 containerd[1586]: 2024-11-12 20:47:12.670 [INFO][5616] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4" Nov 12 20:47:12.674154 containerd[1586]: time="2024-11-12T20:47:12.673679281Z" level=info msg="TearDown network for sandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" successfully" Nov 12 20:47:12.678303 containerd[1586]: time="2024-11-12T20:47:12.678257659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:12.678376 containerd[1586]: time="2024-11-12T20:47:12.678346138Z" level=info msg="RemovePodSandbox \"f3212d9da3f0fae188a3da065c6ee84d0638d4a15b29b8b4bb9d13fd88c0fea4\" returns successfully" Nov 12 20:47:12.678850 containerd[1586]: time="2024-11-12T20:47:12.678799219Z" level=info msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.726 [WARNING][5645] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330", Pod:"calico-apiserver-f884fd4f8-jqrc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e6bfeea99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.726 [INFO][5645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.726 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" iface="eth0" netns="" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.726 [INFO][5645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.726 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.752 [INFO][5653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.752 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.752 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.757 [WARNING][5653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.757 [INFO][5653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.758 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.764823 containerd[1586]: 2024-11-12 20:47:12.761 [INFO][5645] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.765290 containerd[1586]: time="2024-11-12T20:47:12.764858232Z" level=info msg="TearDown network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" successfully" Nov 12 20:47:12.765290 containerd[1586]: time="2024-11-12T20:47:12.764891556Z" level=info msg="StopPodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" returns successfully" Nov 12 20:47:12.765464 containerd[1586]: time="2024-11-12T20:47:12.765426503Z" level=info msg="RemovePodSandbox for \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" Nov 12 20:47:12.765532 containerd[1586]: time="2024-11-12T20:47:12.765480375Z" level=info msg="Forcibly stopping sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\"" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.805 [WARNING][5676] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f327c6-ad19-4f3c-afcb-be3bdd722d7e", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18d8b88067d444c6da78a1f5796026d1b85889cbb4aab6685d8af949da028330", Pod:"calico-apiserver-f884fd4f8-jqrc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e6bfeea99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.805 [INFO][5676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.805 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" iface="eth0" netns="" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.805 [INFO][5676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.805 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.829 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.829 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.829 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.834 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.834 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" HandleID="k8s-pod-network.e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Workload="localhost-k8s-calico--apiserver--f884fd4f8--jqrc8-eth0" Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.836 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.841534 containerd[1586]: 2024-11-12 20:47:12.838 [INFO][5676] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7" Nov 12 20:47:12.842024 containerd[1586]: time="2024-11-12T20:47:12.841550289Z" level=info msg="TearDown network for sandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" successfully" Nov 12 20:47:12.846043 containerd[1586]: time="2024-11-12T20:47:12.845942854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:12.846043 containerd[1586]: time="2024-11-12T20:47:12.846011304Z" level=info msg="RemovePodSandbox \"e693f0b773fa08a0baee5b2805f7d343059fa33f37580479ab0ff9c332e18df7\" returns successfully" Nov 12 20:47:12.846599 containerd[1586]: time="2024-11-12T20:47:12.846568263Z" level=info msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.886 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"685f068e-e191-4943-9495-a2b63c195079", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c", Pod:"calico-apiserver-f884fd4f8-sk7r7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80aa84aadee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.886 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.886 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" iface="eth0" netns="" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.886 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.886 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.927 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.928 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.928 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.933 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.933 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.934 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:12.940002 containerd[1586]: 2024-11-12 20:47:12.937 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:12.940525 containerd[1586]: time="2024-11-12T20:47:12.940041247Z" level=info msg="TearDown network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" successfully" Nov 12 20:47:12.940525 containerd[1586]: time="2024-11-12T20:47:12.940078397Z" level=info msg="StopPodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" returns successfully" Nov 12 20:47:12.940908 containerd[1586]: time="2024-11-12T20:47:12.940665645Z" level=info msg="RemovePodSandbox for \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" Nov 12 20:47:12.940908 containerd[1586]: time="2024-11-12T20:47:12.940693057Z" level=info msg="Forcibly stopping sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\"" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:12.981 [WARNING][5736] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0", GenerateName:"calico-apiserver-f884fd4f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"685f068e-e191-4943-9495-a2b63c195079", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f884fd4f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdf1d25aee0f35d39372603aadf740ab5beb39ba7c6aa22ded5806a3d8d5e86c", Pod:"calico-apiserver-f884fd4f8-sk7r7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80aa84aadee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:12.981 [INFO][5736] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:12.981 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" iface="eth0" netns="" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:12.981 [INFO][5736] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:12.981 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.007 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.007 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.007 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.013 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.013 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" HandleID="k8s-pod-network.db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Workload="localhost-k8s-calico--apiserver--f884fd4f8--sk7r7-eth0" Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.015 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:13.020889 containerd[1586]: 2024-11-12 20:47:13.017 [INFO][5736] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef" Nov 12 20:47:13.021528 containerd[1586]: time="2024-11-12T20:47:13.020934894Z" level=info msg="TearDown network for sandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" successfully" Nov 12 20:47:13.028563 containerd[1586]: time="2024-11-12T20:47:13.028512292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:13.028643 containerd[1586]: time="2024-11-12T20:47:13.028615177Z" level=info msg="RemovePodSandbox \"db90c776a56d3c49d8d6631dacc6181c79f7a602b616c9078aab6fa54d56c4ef\" returns successfully" Nov 12 20:47:13.029390 containerd[1586]: time="2024-11-12T20:47:13.029266195Z" level=info msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.070 [WARNING][5766] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0", GenerateName:"calico-kube-controllers-74c7666879-", Namespace:"calico-system", SelfLink:"", UID:"78ca05ac-4333-451e-baf1-754b7ff398b3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c7666879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda", Pod:"calico-kube-controllers-74c7666879-4rdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe313825c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.071 [INFO][5766] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.071 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" iface="eth0" netns="" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.071 [INFO][5766] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.071 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.098 [INFO][5774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.099 [INFO][5774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.099 [INFO][5774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.108 [WARNING][5774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.108 [INFO][5774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.110 [INFO][5774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:13.117121 containerd[1586]: 2024-11-12 20:47:13.114 [INFO][5766] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.117121 containerd[1586]: time="2024-11-12T20:47:13.117079683Z" level=info msg="TearDown network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" successfully" Nov 12 20:47:13.117121 containerd[1586]: time="2024-11-12T20:47:13.117109109Z" level=info msg="StopPodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" returns successfully" Nov 12 20:47:13.117712 containerd[1586]: time="2024-11-12T20:47:13.117671068Z" level=info msg="RemovePodSandbox for \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" Nov 12 20:47:13.117712 containerd[1586]: time="2024-11-12T20:47:13.117699311Z" level=info msg="Forcibly stopping sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\"" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.156 [WARNING][5797] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0", GenerateName:"calico-kube-controllers-74c7666879-", Namespace:"calico-system", SelfLink:"", UID:"78ca05ac-4333-451e-baf1-754b7ff398b3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c7666879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08236dfa18810ba8540941bb9eb12ff8d8075a533406c8fb1cf0a25a3dbc2cda", Pod:"calico-kube-controllers-74c7666879-4rdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe313825c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.156 [INFO][5797] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.156 [INFO][5797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" iface="eth0" netns="" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.156 [INFO][5797] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.156 [INFO][5797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.182 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.183 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.183 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.188 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.188 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" HandleID="k8s-pod-network.af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Workload="localhost-k8s-calico--kube--controllers--74c7666879--4rdd7-eth0" Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.189 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:13.194955 containerd[1586]: 2024-11-12 20:47:13.192 [INFO][5797] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74" Nov 12 20:47:13.195418 containerd[1586]: time="2024-11-12T20:47:13.195008646Z" level=info msg="TearDown network for sandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" successfully" Nov 12 20:47:13.199768 containerd[1586]: time="2024-11-12T20:47:13.199713652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:13.199831 containerd[1586]: time="2024-11-12T20:47:13.199805236Z" level=info msg="RemovePodSandbox \"af947a1bfc25c797f4fd53dfb45961a308b0d90368e31bfd0382d2914d548a74\" returns successfully" Nov 12 20:47:13.200469 containerd[1586]: time="2024-11-12T20:47:13.200423041Z" level=info msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.242 [WARNING][5827] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5mwdx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de", Pod:"coredns-76f75df574-5mwdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8963253065c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.243 [INFO][5827] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.243 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" iface="eth0" netns="" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.243 [INFO][5827] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.243 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.300 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.300 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.301 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.310 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.310 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.313 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:13.322391 containerd[1586]: 2024-11-12 20:47:13.318 [INFO][5827] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.322391 containerd[1586]: time="2024-11-12T20:47:13.322340985Z" level=info msg="TearDown network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" successfully" Nov 12 20:47:13.322391 containerd[1586]: time="2024-11-12T20:47:13.322378716Z" level=info msg="StopPodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" returns successfully" Nov 12 20:47:13.324991 containerd[1586]: time="2024-11-12T20:47:13.323354872Z" level=info msg="RemovePodSandbox for \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" Nov 12 20:47:13.324991 containerd[1586]: time="2024-11-12T20:47:13.323401431Z" level=info msg="Forcibly stopping sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\"" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.376 [WARNING][5857] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5mwdx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"02e1689f-1ed1-4fdf-a6dc-05b9d8e176d3", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b587c36a8a174d09e0ad3ffd66260b43917ee84440d73d49bb87ecb658e79de", Pod:"coredns-76f75df574-5mwdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8963253065c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.376 [INFO][5857] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.376 [INFO][5857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" iface="eth0" netns="" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.377 [INFO][5857] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.377 [INFO][5857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.413 [INFO][5865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.414 [INFO][5865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.414 [INFO][5865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.425 [WARNING][5865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.425 [INFO][5865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" HandleID="k8s-pod-network.8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Workload="localhost-k8s-coredns--76f75df574--5mwdx-eth0" Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.427 [INFO][5865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:47:13.436330 containerd[1586]: 2024-11-12 20:47:13.431 [INFO][5857] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6" Nov 12 20:47:13.437051 containerd[1586]: time="2024-11-12T20:47:13.436317369Z" level=info msg="TearDown network for sandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" successfully" Nov 12 20:47:13.472114 containerd[1586]: time="2024-11-12T20:47:13.471994745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:47:13.472424 containerd[1586]: time="2024-11-12T20:47:13.472370639Z" level=info msg="RemovePodSandbox \"8310c4f1bff05c7bc7044af121b609520014cbc60baa6ba02653422405385fa6\" returns successfully" Nov 12 20:47:15.774792 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:55216.service - OpenSSH per-connection server daemon (10.0.0.1:55216). Nov 12 20:47:15.814782 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 55216 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:15.816740 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:15.821277 systemd-logind[1564]: New session 16 of user core. Nov 12 20:47:15.832765 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:47:15.982121 sshd[5873]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:15.986219 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:55216.service: Deactivated successfully. Nov 12 20:47:15.988311 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:47:15.990706 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:47:15.993602 systemd-logind[1564]: Removed session 16. Nov 12 20:47:20.996827 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222). Nov 12 20:47:21.031969 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:21.033826 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:21.038325 systemd-logind[1564]: New session 17 of user core. Nov 12 20:47:21.046791 systemd[1]: Started session-17.scope - Session 17 of User core. 
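The containerd entries from 20:47:12 to 20:47:13 above all trace the same Calico CNI DEL path for stale sandboxes: CNI_CONTAINERID no longer matches the live WorkloadEndpoint's ContainerID (so the WEP is kept), CleanUpNamespace is a no-op because no netns name is passed, and the IPAM plugin is then asked, under a host-wide lock, to release the address first by handleID and then by workloadID. Because the addresses were already freed, both lookups miss and the plugin logs "Asked to release address but it doesn't exist. Ignoring" before teardown completes. A minimal Go sketch of that release-with-fallback pattern, with invented names (ipamStore, Release, ...), not Calico's actual IPAM API:

```go
// A minimal sketch of the release sequence visible in the containerd/Calico
// entries above: acquire the host-wide IPAM lock, try to release by handleID,
// fall back to workloadID, and treat "address does not exist" as an ignorable
// warning. All names here are invented for illustration.
package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu         sync.Mutex          // stands in for the "host-wide IPAM lock"
	byHandle   map[string][]string // handleID -> allocated IPs
	byWorkload map[string][]string // workloadID -> allocated IPs
}

func (s *ipamStore) Release(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ips, ok := s.byHandle[handleID]; ok { // "Releasing address using handleID"
		delete(s.byHandle, handleID)
		fmt.Printf("released %v via handleID %q\n", ips, handleID)
		return
	}
	// Mirrors the WARNING in the log: the address was already released by an
	// earlier teardown, so a repeated DEL is a no-op, not an error.
	fmt.Printf("warning: no allocation for handleID %q, ignoring\n", handleID)

	if ips, ok := s.byWorkload[workloadID]; ok { // "Releasing address using workloadID"
		delete(s.byWorkload, workloadID)
		fmt.Printf("released %v via workloadID %q\n", ips, workloadID)
	}
}

func main() {
	s := &ipamStore{byHandle: map[string][]string{}, byWorkload: map[string][]string{}}
	// A second DEL for an already-released sandbox: both lookups miss, which
	// is exactly the warning path recorded at 20:47:12.585 above.
	s.Release("k8s-pod-network.f3212d9d", "localhost-k8s-csi--node--driver--pn8fl-eth0")
}
```

Treating the missing allocation as a warning keeps CNI DEL idempotent, which is why kubelet can safely repeat "Forcibly stopping sandbox" and still get "RemovePodSandbox ... returns successfully".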
Nov 12 20:47:21.184408 sshd[5917]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:21.189929 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:55222.service: Deactivated successfully. Nov 12 20:47:21.193628 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:47:21.194343 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:47:21.195431 systemd-logind[1564]: Removed session 17. Nov 12 20:47:26.197760 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:44944.service - OpenSSH per-connection server daemon (10.0.0.1:44944). Nov 12 20:47:26.235602 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 44944 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:26.237487 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:26.242153 systemd-logind[1564]: New session 18 of user core. Nov 12 20:47:26.255169 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:47:26.300689 kubelet[2815]: E1112 20:47:26.300645 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:26.376891 sshd[5934]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:26.384729 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:44948.service - OpenSSH per-connection server daemon (10.0.0.1:44948). Nov 12 20:47:26.385288 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:44944.service: Deactivated successfully. Nov 12 20:47:26.388704 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:47:26.390581 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:47:26.392426 systemd-logind[1564]: Removed session 18. Nov 12 20:47:26.426103 sshd[5946]: Accepted publickey for core from 10.0.0.1 port 44948 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:26.427796 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:26.432878 systemd-logind[1564]: New session 19 of user core. Nov 12 20:47:26.441768 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:47:26.732973 sshd[5946]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:26.741702 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:44952.service - OpenSSH per-connection server daemon (10.0.0.1:44952). Nov 12 20:47:26.742308 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:44948.service: Deactivated successfully. Nov 12 20:47:26.746612 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:47:26.748208 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:47:26.750700 systemd-logind[1564]: Removed session 19. Nov 12 20:47:26.777036 sshd[5960]: Accepted publickey for core from 10.0.0.1 port 44952 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:26.778931 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:26.783659 systemd-logind[1564]: New session 20 of user core. Nov 12 20:47:26.794758 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:47:28.746485 sshd[5960]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:28.756609 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:44968.service - OpenSSH per-connection server daemon (10.0.0.1:44968). 
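The recurring kubelet dns.go:153 "Nameserver limits exceeded" errors above come from the glibc resolver's hard cap of three nameserver lines (MAXNS): when the node's resolv.conf lists more resolvers than that, kubelet keeps the first three and logs the line it actually applied, here "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Go sketch of that capping behaviour; the function names are invented, and the fourth resolver below is a hypothetical stand-in for whatever entry was dropped:

```go
// Minimal sketch of the nameserver-capping behaviour behind the
// "Nameserver limits exceeded" kubelet errors above. glibc resolvers honour
// at most three "nameserver" lines, so anything past the first three is
// dropped and a warning is emitted. Names are invented, not kubelet's code.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	kept := ns[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(kept, " "))
	return kept
}

func main() {
	// First three resolvers are the ones the log shows being applied; the
	// fourth is invented for illustration, since the omitted entry isn't logged.
	resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(capNameservers(resolvers))
}
```

On the node side, trimming /etc/resolv.conf down to three nameservers makes the warning stop.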
Nov 12 20:47:28.757625 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:44952.service: Deactivated successfully. Nov 12 20:47:28.767379 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:47:28.768688 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:47:28.769900 systemd-logind[1564]: Removed session 20. Nov 12 20:47:28.804835 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 44968 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:28.806587 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:28.810632 systemd-logind[1564]: New session 21 of user core. Nov 12 20:47:28.819832 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:47:29.036555 sshd[5986]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:29.047117 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:44974.service - OpenSSH per-connection server daemon (10.0.0.1:44974). Nov 12 20:47:29.049516 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:44968.service: Deactivated successfully. Nov 12 20:47:29.057978 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:47:29.059841 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:47:29.061525 systemd-logind[1564]: Removed session 21. Nov 12 20:47:29.089946 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 44974 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:29.091841 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:29.097396 systemd-logind[1564]: New session 22 of user core. Nov 12 20:47:29.106949 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:47:29.229233 sshd[6001]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:29.234566 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:44974.service: Deactivated successfully. Nov 12 20:47:29.238072 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:47:29.239491 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:47:29.240581 systemd-logind[1564]: Removed session 22. Nov 12 20:47:31.300606 kubelet[2815]: E1112 20:47:31.300563 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:34.242692 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:44984.service - OpenSSH per-connection server daemon (10.0.0.1:44984). Nov 12 20:47:34.280226 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 44984 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:34.282349 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:34.286975 systemd-logind[1564]: New session 23 of user core. Nov 12 20:47:34.295764 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:47:34.307708 kubelet[2815]: E1112 20:47:34.301362 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:47:34.444499 sshd[6061]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:34.449797 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:44984.service: Deactivated successfully. Nov 12 20:47:34.452799 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. 
Nov 12 20:47:34.452808 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:47:34.453871 systemd-logind[1564]: Removed session 23. Nov 12 20:47:37.726215 kubelet[2815]: I1112 20:47:37.726155 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:47:39.456707 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:47014.service - OpenSSH per-connection server daemon (10.0.0.1:47014). Nov 12 20:47:39.492134 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 47014 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:39.494188 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:39.499330 systemd-logind[1564]: New session 24 of user core. Nov 12 20:47:39.513724 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:47:39.641658 sshd[6089]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:39.646288 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:47014.service: Deactivated successfully. Nov 12 20:47:39.649042 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:47:39.649768 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:47:39.650867 systemd-logind[1564]: Removed session 24. Nov 12 20:47:44.663765 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:47024.service - OpenSSH per-connection server daemon (10.0.0.1:47024). Nov 12 20:47:44.698531 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 47024 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:44.700546 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:44.705259 systemd-logind[1564]: New session 25 of user core. Nov 12 20:47:44.715868 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:47:44.840406 sshd[6104]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:44.845430 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:47024.service: Deactivated successfully. Nov 12 20:47:44.848810 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:47:44.849593 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:47:44.850459 systemd-logind[1564]: Removed session 25. Nov 12 20:47:49.863841 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:52618.service - OpenSSH per-connection server daemon (10.0.0.1:52618). Nov 12 20:47:49.918673 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 52618 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:47:49.921679 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:49.929073 systemd-logind[1564]: New session 26 of user core. Nov 12 20:47:49.934996 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:47:50.189704 sshd[6119]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:50.204150 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:52618.service: Deactivated successfully. Nov 12 20:47:50.213935 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:47:50.216265 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:47:50.219857 systemd-logind[1564]: Removed session 26.
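From 20:47:15 onward the log settles into routine SSH churn, and every connection follows the same lifecycle: socket activation starts a per-connection unit (sshd@N-10.0.0.56:22-10.0.0.1:PORT.service), pam_unix opens a session for user core, systemd-logind creates session-N.scope, and on disconnect the scope and service deactivate and logind removes the session. As an illustration of working with these lines, here is a small Go sketch that pairs each logind "New session N" with its "Removed session N" and reports the session's lifetime; the regexes and structure are ad hoc, not any official journal-parsing API:

```go
// An illustrative parser for the sshd/logind lines above: it pairs each
// logind "New session N" with the matching "Removed session N" and prints
// how long the session lived. The two sample lines are copied from this log;
// everything else is ad hoc.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

var (
	newRe    = regexp.MustCompile(`^(\S+ \d+ \d+:\d+:\d+\.\d+) .*New session (\d+) of user`)
	removeRe = regexp.MustCompile(`^(\S+ \d+ \d+:\d+:\d+\.\d+) .*Removed session (\d+)\.`)
)

const stamp = "Jan 2 15:04:05.000000" // journal timestamps carry no year

func main() {
	journal := `Nov 12 20:47:15.821277 systemd-logind[1564]: New session 16 of user core.
Nov 12 20:47:15.993602 systemd-logind[1564]: Removed session 16.`

	opened := map[string]time.Time{}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t // session opened; remember its start time
			}
		} else if m := removeRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if t, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("session %s lived %s\n", m[2], t.Sub(start))
				}
			}
		}
	}
}
```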