Nov 12 20:43:31.944331 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:43:31.944366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:31.944384 kernel: BIOS-provided physical RAM map:
Nov 12 20:43:31.944390 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:43:31.944396 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 12 20:43:31.944402 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 12 20:43:31.944409 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 12 20:43:31.944415 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 12 20:43:31.944421 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 12 20:43:31.944427 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 12 20:43:31.944435 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 12 20:43:31.944441 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 12 20:43:31.944447 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 12 20:43:31.944453 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 12 20:43:31.944461 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 12 20:43:31.944467 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 12 20:43:31.944476 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 12 20:43:31.944483 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 12 20:43:31.944489 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 12 20:43:31.944495 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:43:31.944501 kernel: NX (Execute Disable) protection: active
Nov 12 20:43:31.944508 kernel: APIC: Static calls initialized
Nov 12 20:43:31.944514 kernel: efi: EFI v2.7 by EDK II
Nov 12 20:43:31.944520 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Nov 12 20:43:31.944527 kernel: SMBIOS 2.8 present.
Nov 12 20:43:31.944533 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 12 20:43:31.944539 kernel: Hypervisor detected: KVM
Nov 12 20:43:31.944548 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:43:31.944554 kernel: kvm-clock: using sched offset of 3947709443 cycles
Nov 12 20:43:31.944561 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:43:31.944568 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:43:31.944574 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:43:31.944581 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:43:31.944588 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 12 20:43:31.944594 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:43:31.944601 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:43:31.944610 kernel: Using GB pages for direct mapping
Nov 12 20:43:31.944616 kernel: Secure boot disabled
Nov 12 20:43:31.944635 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:43:31.944642 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 12 20:43:31.944652 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 20:43:31.944659 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944666 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944675 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 12 20:43:31.944682 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944689 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944695 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944702 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:43:31.944709 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 12 20:43:31.944716 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 12 20:43:31.944725 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Nov 12 20:43:31.944731 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 12 20:43:31.944738 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 12 20:43:31.944745 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 12 20:43:31.944752 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 12 20:43:31.944758 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 12 20:43:31.944765 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 12 20:43:31.944772 kernel: No NUMA configuration found
Nov 12 20:43:31.944779 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 12 20:43:31.944786 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 12 20:43:31.944795 kernel: Zone ranges:
Nov 12 20:43:31.944802 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:43:31.944809 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 12 20:43:31.944816 kernel: Normal empty
Nov 12 20:43:31.944823 kernel: Movable zone start for each node
Nov 12 20:43:31.944829 kernel: Early memory node ranges
Nov 12 20:43:31.944836 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:43:31.944843 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 12 20:43:31.944849 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 12 20:43:31.944858 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 12 20:43:31.944865 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 12 20:43:31.944871 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 12 20:43:31.944878 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 12 20:43:31.944885 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:43:31.944892 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:43:31.944898 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 12 20:43:31.944905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:43:31.944911 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 12 20:43:31.944921 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 12 20:43:31.944927 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 12 20:43:31.944934 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:43:31.944941 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:43:31.944947 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:43:31.944954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:43:31.944961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:43:31.944967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:43:31.944974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:43:31.944981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:43:31.944990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:43:31.944997 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:43:31.945003 kernel: TSC deadline timer available
Nov 12 20:43:31.945010 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:43:31.945017 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:43:31.945023 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:43:31.945030 kernel: kvm-guest: setup PV sched yield
Nov 12 20:43:31.945037 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 12 20:43:31.945043 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:43:31.945053 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:43:31.945060 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:43:31.945066 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:43:31.945073 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:43:31.945080 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:43:31.945086 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:43:31.945093 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:43:31.945101 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:31.945111 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:43:31.945117 kernel: random: crng init done
Nov 12 20:43:31.945124 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:43:31.945131 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:43:31.945138 kernel: Fallback order for Node 0: 0
Nov 12 20:43:31.945156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 12 20:43:31.945167 kernel: Policy zone: DMA32
Nov 12 20:43:31.945175 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:43:31.945185 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171128K reserved, 0K cma-reserved)
Nov 12 20:43:31.945197 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:43:31.945206 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:43:31.945215 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:43:31.945222 kernel: Dynamic Preempt: voluntary
Nov 12 20:43:31.945237 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:43:31.945247 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:43:31.945254 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:43:31.945261 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:43:31.945269 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:43:31.945276 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:43:31.945283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:43:31.945290 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:43:31.945300 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:43:31.945307 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:43:31.945314 kernel: Console: colour dummy device 80x25
Nov 12 20:43:31.945321 kernel: printk: console [ttyS0] enabled
Nov 12 20:43:31.945328 kernel: ACPI: Core revision 20230628
Nov 12 20:43:31.945337 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:43:31.945344 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:43:31.945351 kernel: x2apic enabled
Nov 12 20:43:31.945358 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:43:31.945365 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:43:31.945373 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:43:31.945380 kernel: kvm-guest: setup PV IPIs
Nov 12 20:43:31.945387 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:43:31.945394 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:43:31.945403 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:43:31.945410 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:43:31.945417 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:43:31.945424 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:43:31.945432 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:43:31.945439 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:43:31.945446 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:43:31.945453 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:43:31.945460 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:43:31.945469 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:43:31.945476 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:43:31.945484 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:43:31.945491 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:43:31.945498 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:43:31.945505 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:43:31.945512 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:43:31.945519 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:43:31.945529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:43:31.945536 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:43:31.945543 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:43:31.945550 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:43:31.945557 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:43:31.945564 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:43:31.945571 kernel: landlock: Up and running.
Nov 12 20:43:31.945578 kernel: SELinux: Initializing.
Nov 12 20:43:31.945585 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:43:31.945594 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:43:31.945602 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:43:31.945609 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:43:31.945616 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:43:31.945634 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:43:31.945642 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:43:31.945649 kernel: ... version: 0
Nov 12 20:43:31.945656 kernel: ... bit width: 48
Nov 12 20:43:31.945663 kernel: ... generic registers: 6
Nov 12 20:43:31.945673 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:43:31.945680 kernel: ... max period: 00007fffffffffff
Nov 12 20:43:31.945687 kernel: ... fixed-purpose events: 0
Nov 12 20:43:31.945694 kernel: ... event mask: 000000000000003f
Nov 12 20:43:31.945701 kernel: signal: max sigframe size: 1776
Nov 12 20:43:31.945708 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:43:31.945715 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:43:31.945722 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:43:31.945730 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:43:31.945739 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:43:31.945746 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:43:31.945753 kernel: smpboot: Max logical packages: 1
Nov 12 20:43:31.945760 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:43:31.945767 kernel: devtmpfs: initialized
Nov 12 20:43:31.945775 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:43:31.945782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 12 20:43:31.945789 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 12 20:43:31.945796 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 12 20:43:31.945806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 12 20:43:31.945813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 12 20:43:31.945820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:43:31.945828 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:43:31.945835 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:43:31.945842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:43:31.945849 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:43:31.945856 kernel: audit: type=2000 audit(1731444211.240:1): state=initialized audit_enabled=0 res=1
Nov 12 20:43:31.945863 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:43:31.945873 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:43:31.945880 kernel: cpuidle: using governor menu
Nov 12 20:43:31.945887 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:43:31.945894 kernel: dca service started, version 1.12.1
Nov 12 20:43:31.945902 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:43:31.945916 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:43:31.945924 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:43:31.945932 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:43:31.945939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:43:31.945949 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:43:31.945956 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:43:31.945963 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:43:31.945970 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:43:31.945977 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:43:31.945984 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:43:31.945991 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:43:31.945999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:43:31.946006 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:43:31.946015 kernel: ACPI: Interpreter enabled
Nov 12 20:43:31.946022 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:43:31.946029 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:43:31.946037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:43:31.946044 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:43:31.946051 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:43:31.946058 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:43:31.946246 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:43:31.946383 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:43:31.946503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:43:31.946513 kernel: PCI host bridge to bus 0000:00
Nov 12 20:43:31.946649 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:43:31.946762 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:43:31.946871 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:43:31.946980 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:43:31.947095 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:43:31.947276 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 12 20:43:31.947395 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:43:31.947540 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:43:31.947720 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:43:31.947854 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 12 20:43:31.948020 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 12 20:43:31.948186 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 12 20:43:31.948336 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 12 20:43:31.948507 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:43:31.948721 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:43:31.948883 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 12 20:43:31.949044 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 12 20:43:31.949230 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 12 20:43:31.949401 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:43:31.949560 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 12 20:43:31.949747 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 12 20:43:31.949904 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 12 20:43:31.950072 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:43:31.950241 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 12 20:43:31.950471 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 12 20:43:31.950668 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 12 20:43:31.950829 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 12 20:43:31.951000 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:43:31.951174 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:43:31.951347 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:43:31.951503 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 12 20:43:31.951727 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 12 20:43:31.951902 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:43:31.952056 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 12 20:43:31.952071 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:43:31.952082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:43:31.952092 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:43:31.952103 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:43:31.952119 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:43:31.952129 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:43:31.952140 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:43:31.952162 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:43:31.952173 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:43:31.952184 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:43:31.952194 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:43:31.952204 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:43:31.952215 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:43:31.952229 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:43:31.952239 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:43:31.952249 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:43:31.952259 kernel: iommu: Default domain type: Translated
Nov 12 20:43:31.952270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:43:31.952280 kernel: efivars: Registered efivars operations
Nov 12 20:43:31.952290 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:43:31.952300 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:43:31.952310 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 12 20:43:31.952323 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 12 20:43:31.952333 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 12 20:43:31.952343 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 12 20:43:31.952500 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:43:31.952673 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:43:31.952826 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:43:31.952840 kernel: vgaarb: loaded
Nov 12 20:43:31.952850 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:43:31.952860 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:43:31.952875 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:43:31.952885 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:43:31.952896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:43:31.952906 kernel: pnp: PnP ACPI init
Nov 12 20:43:31.953066 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:43:31.953083 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:43:31.953094 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:43:31.953104 kernel: NET: Registered PF_INET protocol family
Nov 12 20:43:31.953119 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:43:31.953130 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:43:31.953140 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:43:31.953163 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:43:31.953173 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:43:31.953183 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:43:31.953193 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:43:31.953203 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:43:31.953214 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:43:31.953227 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:43:31.953385 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 12 20:43:31.953542 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 12 20:43:31.953719 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:43:31.953868 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:43:31.954010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:43:31.954142 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:43:31.954294 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:43:31.954442 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 12 20:43:31.954457 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:43:31.954467 kernel: Initialise system trusted keyrings
Nov 12 20:43:31.954477 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:43:31.954486 kernel: Key type asymmetric registered
Nov 12 20:43:31.954496 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:43:31.954506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:43:31.954516 kernel: io scheduler mq-deadline registered
Nov 12 20:43:31.954526 kernel: io scheduler kyber registered
Nov 12 20:43:31.954540 kernel: io scheduler bfq registered
Nov 12 20:43:31.954549 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:43:31.954560 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:43:31.954570 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:43:31.954580 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:43:31.954590 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:43:31.954601 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:43:31.954611 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:43:31.954674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:43:31.954690 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:43:31.954850 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:43:31.954867 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:43:31.955010 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:43:31.955157 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:43:31 UTC (1731444211)
Nov 12 20:43:31.955299 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:43:31.955313 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:43:31.955328 kernel: efifb: probing for efifb
Nov 12 20:43:31.955338 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 12 20:43:31.955348 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 12 20:43:31.955357 kernel: efifb: scrolling: redraw
Nov 12 20:43:31.955367 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 12 20:43:31.955378 kernel: Console: switching to colour frame buffer device 100x37
Nov 12 20:43:31.955411 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:43:31.955425 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:43:31.955435 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:43:31.955449 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:43:31.955459 kernel: Segment Routing with IPv6
Nov 12 20:43:31.955470 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:43:31.955480 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:43:31.955491 kernel: Key type dns_resolver registered
Nov 12 20:43:31.955501 kernel: IPI shorthand broadcast: enabled
Nov 12 20:43:31.955511 kernel: sched_clock: Marking stable (623003378, 114915843)->(757747469, -19828248)
Nov 12 20:43:31.955522 kernel: registered taskstats version 1
Nov 12 20:43:31.955532 kernel: Loading compiled-in X.509 certificates
Nov 12 20:43:31.955543 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:43:31.955558 kernel: Key type .fscrypt registered
Nov 12 20:43:31.955568 kernel: Key type fscrypt-provisioning registered
Nov 12 20:43:31.955578 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:43:31.955588 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:43:31.955598 kernel: ima: No architecture policies found
Nov 12 20:43:31.955608 kernel: clk: Disabling unused clocks
Nov 12 20:43:31.955618 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:43:31.955642 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:43:31.955656 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:43:31.955676 kernel: Run /init as init process
Nov 12 20:43:31.955696 kernel: with arguments:
Nov 12 20:43:31.955706 kernel: /init
Nov 12 20:43:31.955716 kernel: with environment:
Nov 12 20:43:31.955725 kernel: HOME=/
Nov 12 20:43:31.955733 kernel: TERM=linux
Nov 12 20:43:31.955744 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:43:31.955757 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:43:31.955772 systemd[1]: Detected virtualization kvm.
Nov 12 20:43:31.955783 systemd[1]: Detected architecture x86-64.
Nov 12 20:43:31.955791 systemd[1]: Running in initrd.
Nov 12 20:43:31.955801 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:43:31.955811 systemd[1]: Hostname set to <localhost>.
Nov 12 20:43:31.955819 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:43:31.955827 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:43:31.955835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:43:31.955846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:43:31.955857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:43:31.955869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:43:31.955880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:43:31.955895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:43:31.955908 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:43:31.955920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:43:31.955931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:43:31.955942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:43:31.955953 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:43:31.955965 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:43:31.955979 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:43:31.955990 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:43:31.956002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:43:31.956013 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:43:31.956024 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:43:31.956035 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:43:31.956047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:43:31.956058 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:43:31.956073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:43:31.956085 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:43:31.956096 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:43:31.956108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:43:31.956119 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:43:31.956131 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:43:31.956142 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:43:31.956163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:43:31.956175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:31.956191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:43:31.956203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:43:31.956214 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:43:31.956253 systemd-journald[192]: Collecting audit messages is disabled.
Nov 12 20:43:31.956284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:43:31.956297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:31.956309 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:31.956321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:43:31.956337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:43:31.956349 systemd-journald[192]: Journal started
Nov 12 20:43:31.956374 systemd-journald[192]: Runtime Journal (/run/log/journal/97190dbdb4bc4942b0fceee8ea1448c1) is 6.0M, max 48.3M, 42.2M free.
Nov 12 20:43:31.944432 systemd-modules-load[193]: Inserted module 'overlay'
Nov 12 20:43:31.958135 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:43:31.962757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:43:31.969332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:43:31.975506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:31.979800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:43:31.983535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:43:31.984178 systemd-modules-load[193]: Inserted module 'br_netfilter'
Nov 12 20:43:31.985297 kernel: Bridge firewalling registered
Nov 12 20:43:31.986192 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:43:31.988433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:43:31.994523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:43:32.002561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:43:32.005068 dracut-cmdline[221]: dracut-dracut-053
Nov 12 20:43:32.006507 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:32.017897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:43:32.053963 systemd-resolved[238]: Positive Trust Anchors:
Nov 12 20:43:32.053984 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:43:32.054016 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:43:32.059437 systemd-resolved[238]: Defaulting to hostname 'linux'.
Nov 12 20:43:32.060607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:43:32.065824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:43:32.100668 kernel: SCSI subsystem initialized
Nov 12 20:43:32.110675 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:43:32.122670 kernel: iscsi: registered transport (tcp)
Nov 12 20:43:32.144663 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:43:32.144736 kernel: QLogic iSCSI HBA Driver
Nov 12 20:43:32.201700 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:43:32.212756 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:43:32.238004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:43:32.238079 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:43:32.239125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:43:32.281665 kernel: raid6: avx2x4 gen() 30339 MB/s
Nov 12 20:43:32.298655 kernel: raid6: avx2x2 gen() 28406 MB/s
Nov 12 20:43:32.315751 kernel: raid6: avx2x1 gen() 21055 MB/s
Nov 12 20:43:32.315776 kernel: raid6: using algorithm avx2x4 gen() 30339 MB/s
Nov 12 20:43:32.333765 kernel: raid6: .... xor() 7265 MB/s, rmw enabled
Nov 12 20:43:32.333791 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:43:32.355653 kernel: xor: automatically using best checksumming function avx
Nov 12 20:43:32.522674 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:43:32.535935 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:43:32.545797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:43:32.557919 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Nov 12 20:43:32.562145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:43:32.569865 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:43:32.584128 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Nov 12 20:43:32.620720 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:43:32.636915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:43:32.698279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:43:32.707974 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:43:32.719241 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:43:32.721250 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:43:32.722029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:43:32.722371 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:43:32.731765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:43:32.740705 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:43:32.753476 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:43:32.753637 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:43:32.753649 kernel: GPT:9289727 != 19775487
Nov 12 20:43:32.753660 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:43:32.753669 kernel: GPT:9289727 != 19775487
Nov 12 20:43:32.753679 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:43:32.753693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:32.741773 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:43:32.762729 kernel: libata version 3.00 loaded.
Nov 12 20:43:32.763649 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:43:32.773950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:43:32.774068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:32.776014 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:32.784861 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:43:32.784886 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:43:32.784897 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:43:32.807738 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:43:32.807763 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:43:32.807912 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:43:32.808047 kernel: scsi host0: ahci
Nov 12 20:43:32.808216 kernel: scsi host1: ahci
Nov 12 20:43:32.808372 kernel: scsi host2: ahci
Nov 12 20:43:32.808513 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468)
Nov 12 20:43:32.808524 kernel: scsi host3: ahci
Nov 12 20:43:32.808690 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462)
Nov 12 20:43:32.808701 kernel: scsi host4: ahci
Nov 12 20:43:32.808839 kernel: scsi host5: ahci
Nov 12 20:43:32.808977 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 12 20:43:32.808988 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 12 20:43:32.808998 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 12 20:43:32.809008 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 12 20:43:32.809022 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 12 20:43:32.809031 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 12 20:43:32.777333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:32.777451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:32.782001 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:32.792971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:32.817492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:43:32.820347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:32.834388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:43:32.839533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:43:32.840886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:43:32.848047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:43:32.862738 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:43:32.868531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:32.868595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:32.870997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:32.872716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:32.888485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:32.889814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:32.921481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:33.000154 disk-uuid[553]: Primary Header is updated.
Nov 12 20:43:33.000154 disk-uuid[553]: Secondary Entries is updated.
Nov 12 20:43:33.000154 disk-uuid[553]: Secondary Header is updated.
Nov 12 20:43:33.004668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:33.009660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:33.117257 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:33.117332 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:33.119100 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:43:33.119190 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:33.119202 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:33.120656 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:33.121668 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:43:33.121692 kernel: ata3.00: applying bridge limits
Nov 12 20:43:33.122730 kernel: ata3.00: configured for UDMA/100
Nov 12 20:43:33.123651 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:43:33.186669 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:43:33.204663 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:43:33.204683 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:43:34.009373 disk-uuid[567]: The operation has completed successfully.
Nov 12 20:43:34.010707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:34.033241 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:43:34.033363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:43:34.060789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:43:34.065513 sh[594]: Success
Nov 12 20:43:34.078676 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:43:34.113804 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:43:34.131602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:43:34.134615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:43:34.145851 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:43:34.145894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:34.145908 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:43:34.147215 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:43:34.148192 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:43:34.154005 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:43:34.155910 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:43:34.171772 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:43:34.174314 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:43:34.184327 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:34.184385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:34.184400 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:43:34.187659 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:43:34.197396 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:43:34.198893 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:34.209387 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:43:34.217809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:43:34.272336 ignition[692]: Ignition 2.19.0
Nov 12 20:43:34.272351 ignition[692]: Stage: fetch-offline
Nov 12 20:43:34.272405 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:34.272418 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:34.272548 ignition[692]: parsed url from cmdline: ""
Nov 12 20:43:34.272553 ignition[692]: no config URL provided
Nov 12 20:43:34.272560 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:43:34.272572 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:43:34.272605 ignition[692]: op(1): [started] loading QEMU firmware config module
Nov 12 20:43:34.272614 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:43:34.283600 ignition[692]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:43:34.301619 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:43:34.314813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:43:34.326480 ignition[692]: parsing config with SHA512: 6677781d36c4922b558669fced764f781060bc1ceb8005ffef4e2a5c09e476e0270a36ce5c6218736633543a44995f810106d1d61e74bdbbfc2bb7cc6d9bfdc7
Nov 12 20:43:34.330391 unknown[692]: fetched base config from "system"
Nov 12 20:43:34.330405 unknown[692]: fetched user config from "qemu"
Nov 12 20:43:34.330900 ignition[692]: fetch-offline: fetch-offline passed
Nov 12 20:43:34.330970 ignition[692]: Ignition finished successfully
Nov 12 20:43:34.335160 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:43:34.338101 systemd-networkd[784]: lo: Link UP
Nov 12 20:43:34.338108 systemd-networkd[784]: lo: Gained carrier
Nov 12 20:43:34.339657 systemd-networkd[784]: Enumeration completed
Nov 12 20:43:34.339982 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:43:34.340020 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:34.340024 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:43:34.341683 systemd-networkd[784]: eth0: Link UP
Nov 12 20:43:34.341687 systemd-networkd[784]: eth0: Gained carrier
Nov 12 20:43:34.341693 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:34.341771 systemd[1]: Reached target network.target - Network.
Nov 12 20:43:34.343380 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:43:34.349430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:43:34.357673 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:43:34.366393 ignition[787]: Ignition 2.19.0
Nov 12 20:43:34.366407 ignition[787]: Stage: kargs
Nov 12 20:43:34.366644 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:34.366662 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:34.367843 ignition[787]: kargs: kargs passed
Nov 12 20:43:34.367900 ignition[787]: Ignition finished successfully
Nov 12 20:43:34.371955 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:43:34.383761 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:43:34.398575 ignition[797]: Ignition 2.19.0
Nov 12 20:43:34.398585 ignition[797]: Stage: disks
Nov 12 20:43:34.398772 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:34.398786 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:34.399566 ignition[797]: disks: disks passed
Nov 12 20:43:34.399608 ignition[797]: Ignition finished successfully
Nov 12 20:43:34.405886 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:43:34.407248 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:43:34.409441 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:43:34.410839 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:43:34.412005 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:43:34.414358 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:43:34.432793 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:43:34.446706 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:43:34.453347 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:43:34.472748 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:43:34.565662 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:43:34.566557 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:43:34.568999 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:43:34.583775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:43:34.586054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:43:34.587393 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:43:34.587446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:43:34.600483 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Nov 12 20:43:34.600508 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:34.600520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:34.600530 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:43:34.587475 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:43:34.595569 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:43:34.601949 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:43:34.606308 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:43:34.607891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:43:34.644212 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:43:34.649040 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:43:34.655160 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:43:34.659404 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:43:34.751837 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:34.765755 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:43:34.768687 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:43:34.775639 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:34.795026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:43:34.797253 ignition[931]: INFO : Ignition 2.19.0 Nov 12 20:43:34.797253 ignition[931]: INFO : Stage: mount Nov 12 20:43:34.797253 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:34.797253 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:43:34.802407 ignition[931]: INFO : mount: mount passed Nov 12 20:43:34.802407 ignition[931]: INFO : Ignition finished successfully Nov 12 20:43:34.799867 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:43:34.804745 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:43:35.144537 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:43:35.156771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:43:35.165405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945) Nov 12 20:43:35.165437 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:43:35.165451 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:43:35.166260 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:43:35.169642 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:43:35.171096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:43:35.189484 ignition[962]: INFO : Ignition 2.19.0 Nov 12 20:43:35.189484 ignition[962]: INFO : Stage: files Nov 12 20:43:35.191426 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:35.191426 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:43:35.191426 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:43:35.194879 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:43:35.194879 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:43:35.194879 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:43:35.199267 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:43:35.199267 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:43:35.199267 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:35.199267 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:43:35.195418 unknown[962]: wrote ssh authorized keys file for user: core Nov 12 20:43:35.240707 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:43:35.381269 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 20:43:35.383613 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Nov 12 20:43:35.722410 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:43:35.973771 systemd-networkd[784]: eth0: Gained IPv6LL Nov 12 20:43:36.053647 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 20:43:36.053647 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:43:36.057250 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 12 20:43:36.059141 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:36.089739 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:36.094687 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:43:36.096405 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:43:36.096405 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:36.096405 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:43:36.096405 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:36.096405 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:43:36.096405 ignition[962]: INFO : files: files passed Nov 12 20:43:36.096405 ignition[962]: INFO : Ignition finished successfully Nov 12 20:43:36.107427 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:43:36.125780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:43:36.126975 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 12 20:43:36.134204 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:43:36.135298 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:43:36.138674 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:43:36.141388 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:36.141388 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:36.144737 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:36.147971 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:36.150876 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:43:36.160752 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:43:36.193185 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:43:36.194212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:43:36.196876 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:43:36.198868 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:43:36.201107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:43:36.203478 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:43:36.219685 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:36.223407 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:43:36.235731 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:36.238238 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:43:36.240798 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:43:36.242834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:43:36.243982 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:43:36.246810 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:43:36.249103 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:43:36.251296 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:43:36.253756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:43:36.256320 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:43:36.258822 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:43:36.261140 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:43:36.263686 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:43:36.265953 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:43:36.268209 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:43:36.269993 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:43:36.271147 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:43:36.273661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 12 20:43:36.276008 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:36.278607 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:43:36.279755 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:36.282400 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:43:36.283516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:43:36.285958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:43:36.287164 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:43:36.289763 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:43:36.291684 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:43:36.296682 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:43:36.299650 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:43:36.301656 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:43:36.303716 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:43:36.304713 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:43:36.306872 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:43:36.307891 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:43:36.310162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:43:36.311505 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:36.314319 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:43:36.315435 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:43:36.328778 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:43:36.330913 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:43:36.332002 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:36.335579 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:43:36.337539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:43:36.338787 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:36.340325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:43:36.341680 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:43:36.345393 ignition[1017]: INFO : Ignition 2.19.0 Nov 12 20:43:36.345393 ignition[1017]: INFO : Stage: umount Nov 12 20:43:36.345393 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:43:36.345393 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:43:36.349967 ignition[1017]: INFO : umount: umount passed Nov 12 20:43:36.349967 ignition[1017]: INFO : Ignition finished successfully Nov 12 20:43:36.349798 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:43:36.349932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:43:36.353218 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:43:36.353349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:43:36.355561 systemd[1]: Stopped target network.target - Network. 
Nov 12 20:43:36.357031 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:43:36.357111 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:43:36.359320 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:43:36.359381 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:43:36.361437 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:43:36.361495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:43:36.363981 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:43:36.364052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:43:36.366355 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:43:36.368472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:43:36.369659 systemd-networkd[784]: eth0: DHCPv6 lease lost Nov 12 20:43:36.371823 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:43:36.372429 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:43:36.372565 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:43:36.374859 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:43:36.374927 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:36.384733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:43:36.386547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:43:36.386612 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:43:36.388791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:36.391886 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:43:36.392018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:43:36.395966 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:43:36.396025 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:36.397365 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:43:36.397413 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:36.397830 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:43:36.397872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:36.404475 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:43:36.404599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:43:36.405505 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:43:36.405682 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:36.407641 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:43:36.407703 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:36.409226 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:43:36.409266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:43:36.411123 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:43:36.411169 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Nov 12 20:43:36.414808 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:43:36.414862 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:43:36.417434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:43:36.417483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:43:36.421309 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:43:36.422183 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:43:36.422234 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:36.422540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:36.422580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:36.434975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:43:36.435095 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:43:36.590853 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:43:36.590975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:43:36.592989 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:43:36.593321 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:43:36.593368 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:43:36.603803 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:43:36.612066 systemd[1]: Switching root. Nov 12 20:43:36.640465 systemd-journald[192]: Journal stopped Nov 12 20:43:37.804374 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 12 20:43:37.804436 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:43:37.804453 kernel: SELinux: policy capability open_perms=1 Nov 12 20:43:37.804468 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:43:37.804479 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:43:37.804490 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:43:37.804502 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:43:37.804513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:43:37.804524 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:43:37.804535 kernel: audit: type=1403 audit(1731444217.052:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:43:37.804547 systemd[1]: Successfully loaded SELinux policy in 40.412ms. Nov 12 20:43:37.804571 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.903ms. Nov 12 20:43:37.804584 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:43:37.804597 systemd[1]: Detected virtualization kvm. Nov 12 20:43:37.804609 systemd[1]: Detected architecture x86-64. Nov 12 20:43:37.804632 systemd[1]: Detected first boot. Nov 12 20:43:37.804644 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:43:37.804656 zram_generator::config[1061]: No configuration found. Nov 12 20:43:37.804669 systemd[1]: Populated /etc with preset unit settings. 
Nov 12 20:43:37.804683 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:43:37.804696 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:43:37.804708 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:43:37.804720 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:43:37.804732 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:43:37.804748 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:43:37.804761 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:43:37.804773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:43:37.804785 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:43:37.804799 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:43:37.804811 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:43:37.804823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:43:37.804836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:43:37.804848 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:43:37.804860 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:43:37.804874 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:43:37.804886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:43:37.804897 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:43:37.804911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:43:37.804923 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:43:37.804935 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:43:37.804947 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:43:37.804958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:43:37.804970 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:43:37.804982 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:43:37.804996 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:43:37.805015 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:43:37.805028 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:43:37.805039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:43:37.805051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:43:37.805063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:43:37.805075 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:43:37.805086 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:43:37.805098 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Nov 12 20:43:37.805110 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:43:37.805124 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:43:37.805141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:37.805153 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:43:37.805164 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:43:37.805176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:43:37.805189 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:43:37.805201 systemd[1]: Reached target machines.target - Containers. Nov 12 20:43:37.805212 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:43:37.805227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:43:37.805239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:43:37.805251 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:43:37.805262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:37.805274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:43:37.805286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:37.805297 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:43:37.805314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:37.805326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:43:37.805341 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:43:37.805353 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:43:37.805364 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:43:37.805376 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:43:37.805388 kernel: fuse: init (API version 7.39) Nov 12 20:43:37.805405 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:43:37.805416 kernel: loop: module loaded Nov 12 20:43:37.805427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:43:37.805439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:43:37.805454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:43:37.805482 systemd-journald[1135]: Collecting audit messages is disabled. Nov 12 20:43:37.805504 systemd-journald[1135]: Journal started Nov 12 20:43:37.805525 systemd-journald[1135]: Runtime Journal (/run/log/journal/97190dbdb4bc4942b0fceee8ea1448c1) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:43:37.582766 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:43:37.601672 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:43:37.602101 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 12 20:43:37.810639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:43:37.810683 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:43:37.810704 systemd[1]: Stopped verity-setup.service. Nov 12 20:43:37.814659 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:37.814697 kernel: ACPI: bus type drm_connector registered Nov 12 20:43:37.818270 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:43:37.819173 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:43:37.820647 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:43:37.822124 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:43:37.823466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:43:37.824967 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:43:37.826490 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:43:37.828013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:43:37.829817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:37.831729 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:43:37.831942 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:43:37.833837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:37.834054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:37.835815 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:43:37.836036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:43:37.837854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:37.838075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:37.839951 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:43:37.840175 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:43:37.841908 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:37.842127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:37.843845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:37.845734 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:43:37.847595 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:43:37.863455 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:43:37.869802 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:43:37.872488 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:43:37.873895 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:43:37.873933 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:43:37.876488 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:43:37.879337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Nov 12 20:43:37.881977 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:43:37.883566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:37.885468 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:43:37.889836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:43:37.891880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:43:37.895548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:43:37.897349 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:43:37.902110 systemd-journald[1135]: Time spent on flushing to /var/log/journal/97190dbdb4bc4942b0fceee8ea1448c1 is 29.199ms for 992 entries. Nov 12 20:43:37.902110 systemd-journald[1135]: System Journal (/var/log/journal/97190dbdb4bc4942b0fceee8ea1448c1) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:43:37.942598 systemd-journald[1135]: Received client request to flush runtime journal. Nov 12 20:43:37.911122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:43:37.919308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:43:37.922705 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:43:37.929966 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:37.931864 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:43:37.933560 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:43:37.935465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:43:37.937755 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:43:37.944834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:43:37.949645 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:43:37.952403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:37.956294 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:43:37.968474 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:43:37.968805 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:43:37.971151 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:43:37.973019 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:43:37.979611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:43:37.985408 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:43:37.996058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:43:37.996883 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 20:43:38.002799 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 20:43:38.005959 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Nov 12 20:43:38.005977 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Nov 12 20:43:38.012380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:38.039671 kernel: loop2: detected capacity change from 0 to 210664 Nov 12 20:43:38.069660 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 20:43:38.081720 kernel: loop4: detected capacity change from 0 to 140768 Nov 12 20:43:38.093660 kernel: loop5: detected capacity change from 0 to 210664 Nov 12 20:43:38.103168 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:43:38.103768 (sd-merge)[1201]: Merged extensions into '/usr'. Nov 12 20:43:38.107378 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:43:38.107394 systemd[1]: Reloading... Nov 12 20:43:38.156649 zram_generator::config[1226]: No configuration found. Nov 12 20:43:38.220183 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:43:38.284059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:43:38.333309 systemd[1]: Reloading finished in 225 ms. Nov 12 20:43:38.380975 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:43:38.382546 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:43:38.399844 systemd[1]: Starting ensure-sysext.service... Nov 12 20:43:38.402068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:43:38.408242 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:43:38.408261 systemd[1]: Reloading... Nov 12 20:43:38.424521 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:43:38.424930 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:43:38.425933 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:43:38.426248 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Nov 12 20:43:38.426327 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Nov 12 20:43:38.429615 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:43:38.429642 systemd-tmpfiles[1265]: Skipping /boot Nov 12 20:43:38.442037 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:43:38.442053 systemd-tmpfiles[1265]: Skipping /boot Nov 12 20:43:38.465293 zram_generator::config[1295]: No configuration found. Nov 12 20:43:38.563150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:43:38.611953 systemd[1]: Reloading finished in 203 ms. Nov 12 20:43:38.630556 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Nov 12 20:43:38.644197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:43:38.654214 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:43:38.657116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:43:38.659874 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:43:38.664505 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:43:38.667976 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:43:38.672969 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:43:38.678070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.678320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:43:38.680997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:38.687964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:38.691490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:38.692850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:38.696715 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:43:38.698077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.699477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:38.699838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:38.702069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:38.702282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:38.706561 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:38.706807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:38.715099 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:43:38.719117 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:43:38.724213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.724410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:43:38.728172 augenrules[1361]: No rules Nov 12 20:43:38.729724 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Nov 12 20:43:38.733273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:38.738352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:38.742197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:38.743514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:38.747574 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 12 20:43:38.749019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.750194 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:43:38.752186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:38.752512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:38.754761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:38.754928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:38.757253 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:43:38.759841 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:38.760147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:38.764341 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:43:38.774849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:43:38.776845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:43:38.789914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.790125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:43:38.799912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:43:38.803794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:43:38.809734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:43:38.813793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:43:38.815839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:38.824847 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:43:38.825997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:43:38.826036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:43:38.826948 systemd[1]: Finished ensure-sysext.service. Nov 12 20:43:38.828351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:38.828563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:38.830322 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:43:38.830541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:43:38.833665 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1380) Nov 12 20:43:38.834272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:38.834488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:43:38.836262 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:38.836475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 12 20:43:38.850794 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:43:38.852263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:43:38.852344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:43:38.855646 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1382) Nov 12 20:43:38.857881 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:43:38.859316 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1380) Nov 12 20:43:38.865202 systemd-resolved[1335]: Positive Trust Anchors: Nov 12 20:43:38.865224 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:43:38.865264 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:43:39.499716 systemd-resolved[1335]: Defaulting to hostname 'linux'. Nov 12 20:43:39.504136 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:43:39.507105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:43:39.539318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:43:39.548825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:43:39.550244 systemd-networkd[1404]: lo: Link UP Nov 12 20:43:39.550255 systemd-networkd[1404]: lo: Gained carrier Nov 12 20:43:39.552106 systemd-networkd[1404]: Enumeration completed Nov 12 20:43:39.552186 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:43:39.552502 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:43:39.552506 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:43:39.553439 systemd[1]: Reached target network.target - Network. Nov 12 20:43:39.553691 systemd-networkd[1404]: eth0: Link UP Nov 12 20:43:39.553696 systemd-networkd[1404]: eth0: Gained carrier Nov 12 20:43:39.553707 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:43:39.555690 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:43:39.562647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 20:43:39.567641 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:43:39.572677 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 12 20:43:39.580216 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:43:39.580645 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 12 20:43:39.582745 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:43:39.604279 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 20:43:39.604557 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 20:43:39.604757 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 20:43:39.604931 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 20:43:39.606929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:39.610726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:43:39.611543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:39.612671 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:43:39.614431 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Nov 12 20:43:39.615886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:43:39.617092 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 20:43:39.617149 systemd-timesyncd[1410]: Initial clock synchronization to Tue 2024-11-12 20:43:39.963325 UTC. Nov 12 20:43:39.621642 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:43:39.708710 kernel: kvm_amd: TSC scaling supported Nov 12 20:43:39.708798 kernel: kvm_amd: Nested Virtualization enabled Nov 12 20:43:39.708832 kernel: kvm_amd: Nested Paging enabled Nov 12 20:43:39.709905 kernel: kvm_amd: LBR virtualization supported Nov 12 20:43:39.709937 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 20:43:39.710687 kernel: kvm_amd: Virtual GIF supported Nov 12 20:43:39.731650 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:43:39.739928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:43:39.765220 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:43:39.792846 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:43:39.801434 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:43:39.831018 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:43:39.832690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:43:39.833850 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:43:39.835067 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:43:39.836361 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:43:39.837883 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:43:39.839212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:43:39.840532 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:43:39.841805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Nov 12 20:43:39.841838 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:43:39.842744 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:43:39.844233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:43:39.847149 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:43:39.859643 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:43:39.861921 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:43:39.863875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:43:39.865020 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:43:39.865986 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:43:39.866978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:43:39.867009 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:43:39.868166 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:43:39.870445 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:43:39.871061 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:43:39.875874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:43:39.878761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:43:39.880725 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:43:39.882391 jq[1444]: false Nov 12 20:43:39.882223 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:43:39.886721 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:43:39.890102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:43:39.892329 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:43:39.899471 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:43:39.900995 dbus-daemon[1443]: [system] SELinux support is enabled Nov 12 20:43:39.902094 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:43:39.902642 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:43:39.904463 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:43:39.908067 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 12 20:43:39.912415 extend-filesystems[1445]: Found loop3
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found loop4
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found loop5
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found sr0
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda1
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda2
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda3
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found usr
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda4
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda6
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda7
Nov 12 20:43:39.915691 extend-filesystems[1445]: Found vda9
Nov 12 20:43:39.915691 extend-filesystems[1445]: Checking size of /dev/vda9
Nov 12 20:43:39.940497 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1396)
Nov 12 20:43:39.912464 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:43:39.942709 extend-filesystems[1445]: Resized partition /dev/vda9
Nov 12 20:43:39.948897 jq[1460]: true
Nov 12 20:43:39.917561 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:43:39.949185 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:43:39.931639 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:43:39.931855 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:43:39.933014 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:43:39.933242 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:43:39.936723 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:43:39.936939 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:43:39.956341 update_engine[1455]: I20241112 20:43:39.955022 1455 main.cc:92] Flatcar Update Engine starting
Nov 12 20:43:39.964104 update_engine[1455]: I20241112 20:43:39.959398 1455 update_check_scheduler.cc:74] Next update check in 2m54s
Nov 12 20:43:39.965649 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:43:39.970271 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:43:39.970904 jq[1469]: true
Nov 12 20:43:39.982924 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:43:39.989241 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 20:43:39.991857 tar[1468]: linux-amd64/helm
Nov 12 20:43:39.991120 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:43:39.991143 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:43:39.991668 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:43:39.991688 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:43:39.992671 systemd-logind[1451]: New seat seat0.
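The resize line above is worth decoding: EXT4 grows from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB, and it happens online while / stays mounted. Flatcar's extend-filesystems service drives this through resize2fs; a rough manual equivalent, assuming the partition itself has already been grown (that step is not shown in this log), would be:

    resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill its partition
                          # 553472 blocks * 4096 B ~= 2.1 GiB  ->  1864699 blocks * 4096 B ~= 7.1 GiB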
Nov 12 20:43:39.993846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:43:39.993873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:43:40.001866 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:43:40.003242 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:43:40.010803 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 20:43:40.032476 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:43:40.042341 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:43:40.042341 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:43:40.042341 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 20:43:40.045912 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Nov 12 20:43:40.045309 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:43:40.045556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:43:40.048747 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:43:40.061285 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:43:40.062684 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:43:40.065856 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:43:40.076029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:43:40.092927 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:43:40.095068 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:48964.service - OpenSSH per-connection server daemon (10.0.0.1:48964).
Nov 12 20:43:40.103005 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:43:40.103379 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:43:40.110917 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:43:40.131491 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:43:40.140574 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:43:40.149015 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:43:40.149465 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:43:40.174538 sshd[1520]: Accepted publickey for core from 10.0.0.1 port 48964 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:40.175894 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:40.186593 systemd-logind[1451]: New session 1 of user core.
Nov 12 20:43:40.187927 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:43:40.195054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:43:40.199075 containerd[1478]: time="2024-11-12T20:43:40.198596756Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:43:40.209860 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:43:40.218922 systemd[1]: Starting user@500.service - User Manager for UID 500...
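Each "Accepted publickey" entry records the SHA256 fingerprint of the client key (SHA256:7Gg0Ix... here), which is how a given login can be traced back to a specific credential. A small sketch for matching the logged fingerprint against key material (the id_rsa.pub path is an assumption for illustration):

    ssh-keygen -lf ~/.ssh/id_rsa.pub                 # prints: <bits> SHA256:<fingerprint> <comment> (RSA)
    ssh-keygen -lf /home/core/.ssh/authorized_keys   # lists fingerprints for every key in the file updated above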
Nov 12 20:43:40.224149 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:43:40.228312 containerd[1478]: time="2024-11-12T20:43:40.228265153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230023 containerd[1478]: time="2024-11-12T20:43:40.229977955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230084738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230109637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230297940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230322263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230389901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230402779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230618625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230634273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230647348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230706 containerd[1478]: time="2024-11-12T20:43:40.230657551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.230977 containerd[1478]: time="2024-11-12T20:43:40.230775424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.231059 containerd[1478]: time="2024-11-12T20:43:40.231030383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:40.231193 containerd[1478]: time="2024-11-12T20:43:40.231166078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:40.231193 containerd[1478]: time="2024-11-12T20:43:40.231184632Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:43:40.231303 containerd[1478]: time="2024-11-12T20:43:40.231278685Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:43:40.231357 containerd[1478]: time="2024-11-12T20:43:40.231340282Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:43:40.236756 containerd[1478]: time="2024-11-12T20:43:40.236712699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:43:40.236828 containerd[1478]: time="2024-11-12T20:43:40.236777380Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:43:40.236828 containerd[1478]: time="2024-11-12T20:43:40.236798180Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:43:40.236828 containerd[1478]: time="2024-11-12T20:43:40.236815939Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:43:40.236889 containerd[1478]: time="2024-11-12T20:43:40.236837378Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:43:40.237048 containerd[1478]: time="2024-11-12T20:43:40.237027865Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:43:40.237288 containerd[1478]: time="2024-11-12T20:43:40.237267334Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:43:40.237436 containerd[1478]: time="2024-11-12T20:43:40.237392274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:43:40.237436 containerd[1478]: time="2024-11-12T20:43:40.237413179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:43:40.237436 containerd[1478]: time="2024-11-12T20:43:40.237426318Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237440951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237454268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237466612Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237480640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237496727Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237515 containerd[1478]: time="2024-11-12T20:43:40.237510556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237523444Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237535558Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237555417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237568765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237581016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237598462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237610910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237638 containerd[1478]: time="2024-11-12T20:43:40.237632192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237644860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237658919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237687559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237702182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237713304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237727195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237739666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237754570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:43:40.237802 containerd[1478]: time="2024-11-12T20:43:40.237791186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237808893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237823903Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237880326Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237899423Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237910136Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237921530Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237930885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237942749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237952522Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 20:43:40.237970 containerd[1478]: time="2024-11-12T20:43:40.237962765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 20:43:40.238359 containerd[1478]: time="2024-11-12T20:43:40.238223463Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 20:43:40.238359 containerd[1478]: time="2024-11-12T20:43:40.238297217Z" level=info msg="Connect containerd service"
Nov 12 20:43:40.238359 containerd[1478]: time="2024-11-12T20:43:40.238332117Z" level=info msg="using legacy CRI server"
Nov 12 20:43:40.238359 containerd[1478]: time="2024-11-12T20:43:40.238340250Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 20:43:40.239392 containerd[1478]: time="2024-11-12T20:43:40.239359826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 20:43:40.240179 containerd[1478]: time="2024-11-12T20:43:40.240084252Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:43:40.240261 containerd[1478]: time="2024-11-12T20:43:40.240219362Z" level=info msg="Start subscribing containerd event"
Nov 12 20:43:40.240286 containerd[1478]: time="2024-11-12T20:43:40.240279444Z" level=info msg="Start recovering state"
Nov 12 20:43:40.240357 containerd[1478]: time="2024-11-12T20:43:40.240337331Z" level=info msg="Start event monitor"
Nov 12 20:43:40.240382 containerd[1478]: time="2024-11-12T20:43:40.240364591Z" level=info msg="Start snapshots syncer"
Nov 12 20:43:40.240382 containerd[1478]: time="2024-11-12T20:43:40.240374813Z" level=info msg="Start cni network conf syncer for default"
Nov 12 20:43:40.240432 containerd[1478]: time="2024-11-12T20:43:40.240382434Z" level=info msg="Start streaming server"
Nov 12 20:43:40.240825 containerd[1478]: time="2024-11-12T20:43:40.240787795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 20:43:40.242897 containerd[1478]: time="2024-11-12T20:43:40.240855559Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 20:43:40.242897 containerd[1478]: time="2024-11-12T20:43:40.240923375Z" level=info msg="containerd successfully booted in 0.044423s"
Nov 12 20:43:40.241910 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 20:43:40.336963 systemd[1533]: Queued start job for default target default.target.
Nov 12 20:43:40.344042 systemd[1533]: Created slice app.slice - User Application Slice.
Nov 12 20:43:40.344067 systemd[1533]: Reached target paths.target - Paths.
Nov 12 20:43:40.344080 systemd[1533]: Reached target timers.target - Timers.
Nov 12 20:43:40.345683 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:43:40.358294 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket.
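The very long "Start cri plugin with config" entry is containerd dumping its effective CRI configuration, and most of what Kubernetes will care about is visible in it: Snapshotter:overlayfs, SystemdCgroup:true for the runc runtime, SandboxImage:registry.k8s.io/pause:3.8, and the CNI paths /opt/cni/bin and /etc/cni/net.d, whose emptiness explains the level=error "failed to load cni during init" line. Hedged ways to inspect the same state on a live node (crictl is a separate install, not part of containerd itself):

    containerd config dump        # prints the effective merged containerd configuration
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
                                  # CRI runtime status; the network stays not-ready until a config lands in /etc/cni/net.d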
Nov 12 20:43:40.358443 systemd[1533]: Reached target sockets.target - Sockets.
Nov 12 20:43:40.358460 systemd[1533]: Reached target basic.target - Basic System.
Nov 12 20:43:40.358506 systemd[1533]: Reached target default.target - Main User Target.
Nov 12 20:43:40.358545 systemd[1533]: Startup finished in 126ms.
Nov 12 20:43:40.358732 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:43:40.361435 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:43:40.419430 tar[1468]: linux-amd64/LICENSE
Nov 12 20:43:40.419515 tar[1468]: linux-amd64/README.md
Nov 12 20:43:40.438938 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:48998.service - OpenSSH per-connection server daemon (10.0.0.1:48998).
Nov 12 20:43:40.446808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:43:40.476497 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 48998 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:40.478044 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:40.481991 systemd-logind[1451]: New session 2 of user core.
Nov 12 20:43:40.491768 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:43:40.548167 sshd[1548]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:40.568326 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:48998.service: Deactivated successfully.
Nov 12 20:43:40.571056 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 20:43:40.573239 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Nov 12 20:43:40.586000 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:49000.service - OpenSSH per-connection server daemon (10.0.0.1:49000).
Nov 12 20:43:40.588389 systemd-logind[1451]: Removed session 2.
Nov 12 20:43:40.618260 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 49000 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:40.620364 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:40.624810 systemd-logind[1451]: New session 3 of user core.
Nov 12 20:43:40.636800 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 20:43:40.693896 sshd[1556]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:40.697732 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:49000.service: Deactivated successfully.
Nov 12 20:43:40.699627 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 20:43:40.700318 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Nov 12 20:43:40.701229 systemd-logind[1451]: Removed session 3.
Nov 12 20:43:41.548505 systemd-networkd[1404]: eth0: Gained IPv6LL
Nov 12 20:43:41.552252 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:43:41.554396 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:43:41.565061 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 20:43:41.568206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:43:41.571038 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:43:41.600049 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:43:41.601945 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 12 20:43:41.602161 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
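user@500.service above is the per-user systemd instance for core (UID 500); each SSH connection then gets its own session-N.scope under user-500.slice, which is why the rest of the log shows scopes being created and torn down per connection. To see the live picture, something like:

    loginctl list-sessions                    # one row per active session scope
    systemd-cgls /user.slice/user-500.slice   # the session scopes plus the user manager itself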
Nov 12 20:43:41.604623 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:43:43.058199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:43:43.099987 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:43:43.100399 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 20:43:43.102105 systemd[1]: Startup finished in 762ms (kernel) + 5.323s (initrd) + 6.089s (userspace) = 12.175s.
Nov 12 20:43:44.099270 kubelet[1584]: E1112 20:43:44.099193 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:43:44.103980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:43:44.104197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:43:44.104599 systemd[1]: kubelet.service: Consumed 2.289s CPU time.
Nov 12 20:43:50.936767 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:55052.service - OpenSSH per-connection server daemon (10.0.0.1:55052).
Nov 12 20:43:50.980441 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 55052 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:50.982417 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:50.987572 systemd-logind[1451]: New session 4 of user core.
Nov 12 20:43:50.994857 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 20:43:51.053410 sshd[1599]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:51.069756 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:55052.service: Deactivated successfully.
Nov 12 20:43:51.071550 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 20:43:51.073175 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Nov 12 20:43:51.074376 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068).
Nov 12 20:43:51.075224 systemd-logind[1451]: Removed session 4.
Nov 12 20:43:51.114863 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:51.116374 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:51.120584 systemd-logind[1451]: New session 5 of user core.
Nov 12 20:43:51.129786 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 20:43:51.180361 sshd[1606]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:51.193451 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:55068.service: Deactivated successfully.
Nov 12 20:43:51.195252 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 20:43:51.196965 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Nov 12 20:43:51.206879 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:55074.service - OpenSSH per-connection server daemon (10.0.0.1:55074).
Nov 12 20:43:51.207990 systemd-logind[1451]: Removed session 5.
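The kubelet failure above is the normal pre-bootstrap state rather than a packaging bug: the unit is enabled, but /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so every start exits with status 1 and systemd keeps scheduling restarts (the restart counter climbs through the rest of this log until the file appears). Confirming that diagnosis on a node looks roughly like:

    ls -l /var/lib/kubelet/config.yaml       # absent until kubeadm init/join has run
    journalctl -u kubelet -n 20 --no-pager   # shows the same run.go:74 "failed to load kubelet config file" error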
Nov 12 20:43:51.243766 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 55074 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:51.245661 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:51.250475 systemd-logind[1451]: New session 6 of user core.
Nov 12 20:43:51.260815 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 20:43:51.319501 sshd[1613]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:51.326623 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:55074.service: Deactivated successfully.
Nov 12 20:43:51.328436 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 20:43:51.330007 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Nov 12 20:43:51.331243 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:55082.service - OpenSSH per-connection server daemon (10.0.0.1:55082).
Nov 12 20:43:51.332120 systemd-logind[1451]: Removed session 6.
Nov 12 20:43:51.386503 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 55082 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:51.388461 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:51.395697 systemd-logind[1451]: New session 7 of user core.
Nov 12 20:43:51.404807 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 20:43:51.466418 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 20:43:51.466872 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:43:51.483330 sudo[1624]: pam_unix(sudo:session): session closed for user root
Nov 12 20:43:51.485795 sshd[1621]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:51.496471 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:55082.service: Deactivated successfully.
Nov 12 20:43:51.498964 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:43:51.501209 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:43:51.519040 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:55086.service - OpenSSH per-connection server daemon (10.0.0.1:55086).
Nov 12 20:43:51.520474 systemd-logind[1451]: Removed session 7.
Nov 12 20:43:51.557800 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 55086 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:51.559662 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:51.564407 systemd-logind[1451]: New session 8 of user core.
Nov 12 20:43:51.574920 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 20:43:51.630563 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 20:43:51.630933 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:43:51.635146 sudo[1633]: pam_unix(sudo:session): session closed for user root
Nov 12 20:43:51.641622 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 20:43:51.642027 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:43:51.664919 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 20:43:51.666598 auditctl[1636]: No rules
Nov 12 20:43:51.667103 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 20:43:51.667403 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 20:43:51.670785 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:43:51.702416 augenrules[1654]: No rules
Nov 12 20:43:51.704716 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:43:51.706117 sudo[1632]: pam_unix(sudo:session): session closed for user root
Nov 12 20:43:51.708149 sshd[1629]: pam_unix(sshd:session): session closed for user core
Nov 12 20:43:51.726799 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:55086.service: Deactivated successfully.
Nov 12 20:43:51.728915 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 20:43:51.730597 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
Nov 12 20:43:51.743983 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:55100.service - OpenSSH per-connection server daemon (10.0.0.1:55100).
Nov 12 20:43:51.745073 systemd-logind[1451]: Removed session 8.
Nov 12 20:43:51.778663 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 55100 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:43:51.780327 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:43:51.784997 systemd-logind[1451]: New session 9 of user core.
Nov 12 20:43:51.798921 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 12 20:43:51.854282 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 20:43:51.854607 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:43:52.451111 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 20:43:52.452210 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 20:43:53.173210 dockerd[1683]: time="2024-11-12T20:43:53.173112194Z" level=info msg="Starting up"
Nov 12 20:43:53.926595 dockerd[1683]: time="2024-11-12T20:43:53.926538181Z" level=info msg="Loading containers: start."
Nov 12 20:43:54.183668 kernel: Initializing XFRM netlink socket
Nov 12 20:43:54.218451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:43:54.239435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:43:54.277962 systemd-networkd[1404]: docker0: Link UP
Nov 12 20:43:54.508790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:43:54.513763 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:43:55.180485 kubelet[1792]: E1112 20:43:55.180416 1792 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:43:55.188182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:43:55.188445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:43:55.693517 dockerd[1683]: time="2024-11-12T20:43:55.693457990Z" level=info msg="Loading containers: done."
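The audit-rules cycle above is augenrules at work: it concatenates the fragments under /etc/audit/rules.d/ and loads the result with auditctl, so after install.sh deleted 80-selinux.rules and 99-default.rules both tools correctly report "No rules". The same cycle by hand:

    augenrules --load   # rebuild the kernel rule set from /etc/audit/rules.d/*.rules
    auditctl -l         # list loaded rules; prints "No rules" for an empty directory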
Nov 12 20:43:55.711447 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2441653518-merged.mount: Deactivated successfully.
Nov 12 20:43:55.886550 dockerd[1683]: time="2024-11-12T20:43:55.886354779Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 20:43:55.886871 dockerd[1683]: time="2024-11-12T20:43:55.886645712Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 12 20:43:55.886871 dockerd[1683]: time="2024-11-12T20:43:55.886815818Z" level=info msg="Daemon has completed initialization"
Nov 12 20:43:56.001350 dockerd[1683]: time="2024-11-12T20:43:56.001117545Z" level=info msg="API listen on /run/docker.sock"
Nov 12 20:43:56.001697 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 20:43:57.199245 containerd[1478]: time="2024-11-12T20:43:57.199204953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\""
Nov 12 20:44:00.493717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106424333.mount: Deactivated successfully.
Nov 12 20:44:05.438668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 20:44:05.447935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:05.591103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:05.598459 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:44:05.960065 kubelet[1884]: E1112 20:44:05.959982 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:44:05.964499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:44:05.964756 systemd[1]: kubelet.service: Failed with result 'exit-code'.
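dockerd settles on the overlay2 storage driver and warns that it will not use the native overlayfs diff because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the message itself this only risks slower image builds, not broken containers. Quick checks against the daemon that just came up:

    docker info --format '{{.Driver}}'   # expected: overlay2
    docker version                       # exercises the API on /run/docker.sock announced above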
Nov 12 20:44:07.991365 containerd[1478]: time="2024-11-12T20:44:07.991306324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:07.992318 containerd[1478]: time="2024-11-12T20:44:07.992240413Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676443"
Nov 12 20:44:07.993269 containerd[1478]: time="2024-11-12T20:44:07.993231396Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:07.996303 containerd[1478]: time="2024-11-12T20:44:07.996270735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:07.997608 containerd[1478]: time="2024-11-12T20:44:07.997579527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 10.798275447s"
Nov 12 20:44:07.997674 containerd[1478]: time="2024-11-12T20:44:07.997611595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\""
Nov 12 20:44:08.031322 containerd[1478]: time="2024-11-12T20:44:08.031283361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\""
Nov 12 20:44:10.852186 containerd[1478]: time="2024-11-12T20:44:10.852100077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:10.853778 containerd[1478]: time="2024-11-12T20:44:10.853727431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605796"
Nov 12 20:44:10.856117 containerd[1478]: time="2024-11-12T20:44:10.856039668Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:10.862485 containerd[1478]: time="2024-11-12T20:44:10.862393759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:10.863679 containerd[1478]: time="2024-11-12T20:44:10.863635034Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 2.832295888s"
Nov 12 20:44:10.863742 containerd[1478]: time="2024-11-12T20:44:10.863676755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\""
Nov 12 20:44:10.891494 containerd[1478]: time="2024-11-12T20:44:10.891444982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\""
Nov 12 20:44:14.036166 containerd[1478]: time="2024-11-12T20:44:14.036063570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:14.047409 containerd[1478]: time="2024-11-12T20:44:14.047323614Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784244"
Nov 12 20:44:14.071203 containerd[1478]: time="2024-11-12T20:44:14.071127132Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:14.102067 containerd[1478]: time="2024-11-12T20:44:14.102020312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:14.103237 containerd[1478]: time="2024-11-12T20:44:14.103192928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 3.211690472s"
Nov 12 20:44:14.103237 containerd[1478]: time="2024-11-12T20:44:14.103231261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\""
Nov 12 20:44:14.126001 containerd[1478]: time="2024-11-12T20:44:14.125938870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\""
Nov 12 20:44:15.968721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 20:44:15.976973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:16.137436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:16.143952 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:44:16.218446 kubelet[1968]: E1112 20:44:16.218404 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:44:16.223330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:44:16.223604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:44:16.313561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508592773.mount: Deactivated successfully.
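The PullImage/ImageCreate pairs here are CRI calls into containerd, pre-fetching the v1.30.6 control-plane images (10.8 s for kube-apiserver, 2.8 s for kube-controller-manager, 3.2 s for kube-scheduler); plausibly the install.sh run as root earlier is driving a kubeadm-style image pull, though the caller is not named in the log. The same operations can be issued manually:

    crictl pull registry.k8s.io/kube-scheduler:v1.30.6   # one CRI PullImage call
    crictl images                                        # the images containerd now holds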
Nov 12 20:44:18.381315 containerd[1478]: time="2024-11-12T20:44:18.381240812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:18.388044 containerd[1478]: time="2024-11-12T20:44:18.387983554Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054624"
Nov 12 20:44:18.393540 containerd[1478]: time="2024-11-12T20:44:18.393507034Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:18.397648 containerd[1478]: time="2024-11-12T20:44:18.397610711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:18.398380 containerd[1478]: time="2024-11-12T20:44:18.398329119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 4.272348528s"
Nov 12 20:44:18.398429 containerd[1478]: time="2024-11-12T20:44:18.398379667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\""
Nov 12 20:44:18.456093 containerd[1478]: time="2024-11-12T20:44:18.456042840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 20:44:20.178753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327150484.mount: Deactivated successfully.
Nov 12 20:44:23.296477 containerd[1478]: time="2024-11-12T20:44:23.296410238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:23.305803 containerd[1478]: time="2024-11-12T20:44:23.305736989Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 20:44:23.311071 containerd[1478]: time="2024-11-12T20:44:23.311030483Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:23.320125 containerd[1478]: time="2024-11-12T20:44:23.320044492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:23.321499 containerd[1478]: time="2024-11-12T20:44:23.321442515Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.865350192s"
Nov 12 20:44:23.321499 containerd[1478]: time="2024-11-12T20:44:23.321484285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 20:44:23.351252 containerd[1478]: time="2024-11-12T20:44:23.351190182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 12 20:44:24.376478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849666914.mount: Deactivated successfully.
Nov 12 20:44:24.382756 containerd[1478]: time="2024-11-12T20:44:24.382714832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:24.383426 containerd[1478]: time="2024-11-12T20:44:24.383368108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Nov 12 20:44:24.384528 containerd[1478]: time="2024-11-12T20:44:24.384497936Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:24.386614 containerd[1478]: time="2024-11-12T20:44:24.386574852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:24.387304 containerd[1478]: time="2024-11-12T20:44:24.387258038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.036007064s"
Nov 12 20:44:24.387343 containerd[1478]: time="2024-11-12T20:44:24.387304467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 12 20:44:24.410228 containerd[1478]: time="2024-11-12T20:44:24.410178544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Nov 12 20:44:24.985083 update_engine[1455]: I20241112 20:44:24.984851 1455 update_attempter.cc:509] Updating boot flags...
Nov 12 20:44:25.016107 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2056)
Nov 12 20:44:25.059535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759176807.mount: Deactivated successfully.
Nov 12 20:44:25.063676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2059)
Nov 12 20:44:26.258505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 12 20:44:26.275031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:26.439429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:26.443842 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:44:26.604262 kubelet[2115]: E1112 20:44:26.604114 2115 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:44:26.610151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:44:26.610983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
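"Updating boot flags" is Flatcar's A/B update engine marking the currently booted partition set as good; locksmithd, started earlier with strategy="reboot", is the piece that would coordinate a reboot after a future update. On Flatcar the state can be queried with:

    update_engine_client -status   # reports CURRENT_OP (UPDATE_STATUS_IDLE above) and the tracked version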
Nov 12 20:44:29.544463 containerd[1478]: time="2024-11-12T20:44:29.544403892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:29.588571 containerd[1478]: time="2024-11-12T20:44:29.588482480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Nov 12 20:44:29.623512 containerd[1478]: time="2024-11-12T20:44:29.623454971Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:29.629078 containerd[1478]: time="2024-11-12T20:44:29.629019635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:29.630210 containerd[1478]: time="2024-11-12T20:44:29.630147933Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.219930789s"
Nov 12 20:44:29.630210 containerd[1478]: time="2024-11-12T20:44:29.630193045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Nov 12 20:44:32.393567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:32.407994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:32.426218 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-9.scope)...
Nov 12 20:44:32.426236 systemd[1]: Reloading...
Nov 12 20:44:32.556716 zram_generator::config[2254]: No configuration found.
Nov 12 20:44:32.852514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:44:32.938710 systemd[1]: Reloading finished in 511 ms.
Nov 12 20:44:32.997967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:33.001126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:33.003122 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:44:33.003377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:33.005151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:33.147536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:33.152859 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:44:33.192053 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:44:33.192053 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
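The reload requested by systemctl (PID 2211) followed by a stop/start of kubelet is the classic sequence after dropping in new unit configuration; evidently the bootstrap has progressed, since the kubelet that comes back (PID 2300) now runs with --container-runtime-endpoint and friends instead of dying on the missing config file. Roughly:

    systemctl daemon-reload     # produces the Reloading... / Reloading finished lines above
    systemctl restart kubelet   # the stop/start pair that yields the new kubelet process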
Nov 12 20:44:33.192053 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:44:33.192456 kubelet[2300]: I1112 20:44:33.192103 2300 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:44:33.455915 kubelet[2300]: I1112 20:44:33.455774 2300 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 20:44:33.455915 kubelet[2300]: I1112 20:44:33.455805 2300 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:44:33.456074 kubelet[2300]: I1112 20:44:33.456027 2300 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 20:44:33.512928 kubelet[2300]: I1112 20:44:33.512852 2300 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:44:33.522902 kubelet[2300]: E1112 20:44:33.522873 2300 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.547251 kubelet[2300]: I1112 20:44:33.547210 2300 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:44:33.547484 kubelet[2300]: I1112 20:44:33.547441 2300 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:44:33.547674 kubelet[2300]: I1112 20:44:33.547473 2300 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:44:33.547793 kubelet[2300]: I1112 20:44:33.547686 2300 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 12 20:44:33.547793 kubelet[2300]: I1112 20:44:33.547695 2300 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:44:33.547879 kubelet[2300]: I1112 20:44:33.547860 2300 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:33.558386 kubelet[2300]: I1112 20:44:33.558347 2300 kubelet.go:400] "Attempting to sync node with API server" Nov 12 20:44:33.558386 kubelet[2300]: I1112 20:44:33.558369 2300 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:44:33.558454 kubelet[2300]: I1112 20:44:33.558399 2300 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:44:33.558454 kubelet[2300]: I1112 20:44:33.558419 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:44:33.559705 kubelet[2300]: W1112 20:44:33.559603 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.559705 kubelet[2300]: E1112 20:44:33.559708 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.560136 kubelet[2300]: W1112 20:44:33.560073 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.560185 kubelet[2300]: E1112 20:44:33.560142 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.587223 kubelet[2300]: I1112 20:44:33.587163 2300 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:44:33.606945 kubelet[2300]: I1112 20:44:33.606906 2300 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:44:33.607047 kubelet[2300]: W1112 20:44:33.606995 2300 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 20:44:33.607994 kubelet[2300]: I1112 20:44:33.607962 2300 server.go:1264] "Started kubelet" Nov 12 20:44:33.608115 kubelet[2300]: I1112 20:44:33.608043 2300 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:44:33.609165 kubelet[2300]: I1112 20:44:33.609141 2300 server.go:455] "Adding debug handlers to kubelet server" Nov 12 20:44:33.609201 kubelet[2300]: I1112 20:44:33.609194 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:44:33.610045 kubelet[2300]: I1112 20:44:33.609989 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:44:33.610224 kubelet[2300]: I1112 20:44:33.610206 2300 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:44:33.614418 kubelet[2300]: E1112 20:44:33.614396 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:33.614472 kubelet[2300]: I1112 20:44:33.614445 2300 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:44:33.614583 kubelet[2300]: I1112 20:44:33.614558 2300 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 20:44:33.614662 kubelet[2300]: I1112 20:44:33.614649 2300 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:44:33.615043 kubelet[2300]: W1112 20:44:33.615000 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.615073 kubelet[2300]: E1112 20:44:33.615044 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.616349 kubelet[2300]: E1112 20:44:33.616282 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Nov 12 20:44:33.616469 kubelet[2300]: E1112 20:44:33.616381 2300 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:44:33.617451 kubelet[2300]: I1112 20:44:33.617433 2300 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:44:33.617451 kubelet[2300]: I1112 20:44:33.617447 2300 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:44:33.617690 kubelet[2300]: I1112 20:44:33.617498 2300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:44:33.648642 kubelet[2300]: I1112 20:44:33.648317 2300 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 12 20:44:33.650147 kubelet[2300]: E1112 20:44:33.650047 2300 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075364acc8ea29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:44:33.607928361 +0000 UTC m=+0.451266771,LastTimestamp:2024-11-12 20:44:33.607928361 +0000 UTC m=+0.451266771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:44:33.650242 kubelet[2300]: I1112 20:44:33.650224 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:44:33.650270 kubelet[2300]: I1112 20:44:33.650260 2300 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:44:33.650289 kubelet[2300]: I1112 20:44:33.650283 2300 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 20:44:33.651463 kubelet[2300]: E1112 20:44:33.650338 2300 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:44:33.652160 kubelet[2300]: W1112 20:44:33.652119 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.652205 kubelet[2300]: E1112 20:44:33.652163 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:33.655481 kubelet[2300]: I1112 20:44:33.655452 2300 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:44:33.655481 kubelet[2300]: I1112 20:44:33.655481 2300 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:44:33.655481 kubelet[2300]: I1112 20:44:33.655502 2300 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:33.716398 kubelet[2300]: I1112 20:44:33.716290 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:33.716615 kubelet[2300]: E1112 20:44:33.716591 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 12 20:44:33.750951 kubelet[2300]: E1112 20:44:33.750870 2300 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:44:33.817763 kubelet[2300]: E1112 20:44:33.817705 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Nov 12 20:44:33.918067 kubelet[2300]: I1112 20:44:33.918029 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" 
Nov 12 20:44:33.918402 kubelet[2300]: E1112 20:44:33.918375 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 12 20:44:33.951481 kubelet[2300]: E1112 20:44:33.951459 2300 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:44:34.218586 kubelet[2300]: E1112 20:44:34.218527 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Nov 12 20:44:34.320106 kubelet[2300]: I1112 20:44:34.320071 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:34.320419 kubelet[2300]: E1112 20:44:34.320397 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 12 20:44:34.352579 kubelet[2300]: E1112 20:44:34.352521 2300 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:44:34.695615 kubelet[2300]: W1112 20:44:34.695526 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:34.695615 kubelet[2300]: E1112 20:44:34.695611 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:34.755719 kubelet[2300]: W1112 20:44:34.755602 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:34.755719 kubelet[2300]: E1112 20:44:34.755711 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:34.928373 kubelet[2300]: W1112 20:44:34.928312 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:34.928373 kubelet[2300]: E1112 20:44:34.928377 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:35.019206 kubelet[2300]: E1112 20:44:35.019149 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Nov 12 20:44:35.115310 kubelet[2300]: 
W1112 20:44:35.115196 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:35.115310 kubelet[2300]: E1112 20:44:35.115264 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:35.122726 kubelet[2300]: I1112 20:44:35.122683 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:35.123130 kubelet[2300]: E1112 20:44:35.123085 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 12 20:44:35.153408 kubelet[2300]: E1112 20:44:35.153354 2300 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:44:35.716952 kubelet[2300]: E1112 20:44:35.716899 2300 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:35.987452 kubelet[2300]: I1112 20:44:35.987291 2300 policy_none.go:49] "None policy: Start" Nov 12 20:44:35.988239 kubelet[2300]: I1112 20:44:35.988199 2300 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:44:35.988300 kubelet[2300]: I1112 20:44:35.988245 2300 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:44:36.043937 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:44:36.059748 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:44:36.063158 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:44:36.073847 kubelet[2300]: I1112 20:44:36.073717 2300 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:44:36.074047 kubelet[2300]: I1112 20:44:36.073980 2300 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:44:36.074167 kubelet[2300]: I1112 20:44:36.074159 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:44:36.075274 kubelet[2300]: E1112 20:44:36.075252 2300 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:44:36.620101 kubelet[2300]: E1112 20:44:36.620040 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="3.2s" Nov 12 20:44:36.725115 kubelet[2300]: I1112 20:44:36.725080 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:36.725484 kubelet[2300]: E1112 20:44:36.725414 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 12 20:44:36.750026 kubelet[2300]: W1112 20:44:36.749987 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:36.750026 kubelet[2300]: E1112 20:44:36.750022 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:36.754144 kubelet[2300]: I1112 20:44:36.754106 2300 topology_manager.go:215] "Topology Admit Handler" podUID="c38721aa07c6c22ac7bea9feb0c13a62" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:44:36.755108 kubelet[2300]: I1112 20:44:36.755073 2300 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:44:36.755990 kubelet[2300]: I1112 20:44:36.755961 2300 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:44:36.761677 systemd[1]: Created slice kubepods-burstable-podc38721aa07c6c22ac7bea9feb0c13a62.slice - libcontainer container kubepods-burstable-podc38721aa07c6c22ac7bea9feb0c13a62.slice. Nov 12 20:44:36.773827 systemd[1]: Created slice kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice - libcontainer container kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice. Nov 12 20:44:36.787586 systemd[1]: Created slice kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice - libcontainer container kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice. 
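[Editor's note] The pod-level slices created above encode the QoS class and pod UID directly into the systemd unit name (e.g. kubepods-burstable-podc38721aa07c6c22ac7bea9feb0c13a62.slice). A sketch of that naming rule; the detail that guaranteed pods sit directly under kubepods.slice comes from upstream kubelet documentation, not from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the slice names seen above. In systemd unit names
    // "-" separates nesting levels, so literal dashes in a pod UID are escaped
    // to "_" before embedding.
    func podSliceName(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" { // assumed placement per upstream docs
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // Matches the slice created for the kube-apiserver static pod above.
        fmt.Println(podSliceName("burstable", "c38721aa07c6c22ac7bea9feb0c13a62"))
    }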
Nov 12 20:44:36.836698 kubelet[2300]: I1112 20:44:36.836615 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:36.836698 kubelet[2300]: I1112 20:44:36.836688 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:36.836865 kubelet[2300]: I1112 20:44:36.836713 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:36.836865 kubelet[2300]: I1112 20:44:36.836728 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:36.836865 kubelet[2300]: I1112 20:44:36.836766 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:44:36.836865 kubelet[2300]: I1112 20:44:36.836822 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:36.836865 kubelet[2300]: I1112 20:44:36.836859 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:36.836977 kubelet[2300]: I1112 20:44:36.836883 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:36.836977 kubelet[2300]: I1112 20:44:36.836901 2300 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:36.850190 kubelet[2300]: W1112 20:44:36.850161 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:36.850190 kubelet[2300]: E1112 20:44:36.850195 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:37.072500 kubelet[2300]: E1112 20:44:37.072444 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:37.073262 containerd[1478]: time="2024-11-12T20:44:37.073218273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c38721aa07c6c22ac7bea9feb0c13a62,Namespace:kube-system,Attempt:0,}" Nov 12 20:44:37.085589 kubelet[2300]: E1112 20:44:37.085547 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:37.086002 containerd[1478]: time="2024-11-12T20:44:37.085897005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,}" Nov 12 20:44:37.093341 kubelet[2300]: E1112 20:44:37.093292 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:37.093600 containerd[1478]: time="2024-11-12T20:44:37.093565705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,}" Nov 12 20:44:37.709550 kubelet[2300]: W1112 20:44:37.709509 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:37.709550 kubelet[2300]: E1112 20:44:37.709548 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:37.969521 kubelet[2300]: W1112 20:44:37.969401 2300 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:37.969521 kubelet[2300]: E1112 20:44:37.969448 2300 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Nov 12 20:44:38.121965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251057592.mount: Deactivated successfully. 
Nov 12 20:44:38.128817 containerd[1478]: time="2024-11-12T20:44:38.128774749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:38.130809 containerd[1478]: time="2024-11-12T20:44:38.130752531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:38.131692 containerd[1478]: time="2024-11-12T20:44:38.131654211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:44:38.133092 containerd[1478]: time="2024-11-12T20:44:38.132994075Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:38.133905 containerd[1478]: time="2024-11-12T20:44:38.133835407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:44:38.134849 containerd[1478]: time="2024-11-12T20:44:38.134801344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:38.135917 containerd[1478]: time="2024-11-12T20:44:38.135881816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:44:38.136906 containerd[1478]: time="2024-11-12T20:44:38.136877857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:38.137691 containerd[1478]: time="2024-11-12T20:44:38.137658249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.064361747s" Nov 12 20:44:38.140926 containerd[1478]: time="2024-11-12T20:44:38.140890504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.054944864s" Nov 12 20:44:38.141601 containerd[1478]: time="2024-11-12T20:44:38.141549116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.047938694s" Nov 12 20:44:38.283878 containerd[1478]: time="2024-11-12T20:44:38.283663906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:38.284701 containerd[1478]: time="2024-11-12T20:44:38.284599628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:38.284701 containerd[1478]: time="2024-11-12T20:44:38.284665037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.283808323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285395584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285479462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285158057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285209867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285223928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.285286962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.284799223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.285840 containerd[1478]: time="2024-11-12T20:44:38.284898334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:38.313781 systemd[1]: Started cri-containerd-1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf.scope - libcontainer container 1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf. Nov 12 20:44:38.315263 systemd[1]: Started cri-containerd-2f6ee1a3fa752d7440d70f520234e6402849ce90ffa1209beaa31e33b10f1a63.scope - libcontainer container 2f6ee1a3fa752d7440d70f520234e6402849ce90ffa1209beaa31e33b10f1a63. Nov 12 20:44:38.319747 systemd[1]: Started cri-containerd-386dcc86183ce05948a23359931ee6c909d59d000530f9e291614655ee57aa3d.scope - libcontainer container 386dcc86183ce05948a23359931ee6c909d59d000530f9e291614655ee57aa3d. 
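[Editor's note] With the systemd cgroup driver, every sandbox or container that containerd launches becomes a transient cri-containerd-<id>.scope unit, which is exactly what the Started lines above show for the three static-pod sandboxes. A trivial sketch of the name mapping:

    package main

    import "fmt"

    // scopeUnit reproduces the transient unit names above: the full container
    // or sandbox ID is embedded in a cri-containerd-*.scope unit.
    func scopeUnit(containerID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
        // The kube-apiserver sandbox ID from the log.
        fmt.Println(scopeUnit("1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf"))
    }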
Nov 12 20:44:38.357428 containerd[1478]: time="2024-11-12T20:44:38.357256584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c38721aa07c6c22ac7bea9feb0c13a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf\"" Nov 12 20:44:38.358875 kubelet[2300]: E1112 20:44:38.358804 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:38.362121 containerd[1478]: time="2024-11-12T20:44:38.362069345Z" level=info msg="CreateContainer within sandbox \"1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:44:38.367156 containerd[1478]: time="2024-11-12T20:44:38.367092674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"386dcc86183ce05948a23359931ee6c909d59d000530f9e291614655ee57aa3d\"" Nov 12 20:44:38.369124 kubelet[2300]: E1112 20:44:38.369097 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:38.369556 containerd[1478]: time="2024-11-12T20:44:38.369478846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f6ee1a3fa752d7440d70f520234e6402849ce90ffa1209beaa31e33b10f1a63\"" Nov 12 20:44:38.370940 kubelet[2300]: E1112 20:44:38.370916 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:38.372271 containerd[1478]: time="2024-11-12T20:44:38.372236279Z" level=info msg="CreateContainer within sandbox \"386dcc86183ce05948a23359931ee6c909d59d000530f9e291614655ee57aa3d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:44:38.372929 containerd[1478]: time="2024-11-12T20:44:38.372895212Z" level=info msg="CreateContainer within sandbox \"2f6ee1a3fa752d7440d70f520234e6402849ce90ffa1209beaa31e33b10f1a63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:44:38.395104 containerd[1478]: time="2024-11-12T20:44:38.394952661Z" level=info msg="CreateContainer within sandbox \"1b26815e1058654cfb4df0f55f370f953482c401bf7b37bf9f09098c22e30ddf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f8e9fd10ddf6f9f595026797475adc0aa6d4a964c838e5a4b0e6f1408468ddd\"" Nov 12 20:44:38.395769 containerd[1478]: time="2024-11-12T20:44:38.395637739Z" level=info msg="StartContainer for \"8f8e9fd10ddf6f9f595026797475adc0aa6d4a964c838e5a4b0e6f1408468ddd\"" Nov 12 20:44:38.403866 containerd[1478]: time="2024-11-12T20:44:38.403659875Z" level=info msg="CreateContainer within sandbox \"2f6ee1a3fa752d7440d70f520234e6402849ce90ffa1209beaa31e33b10f1a63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8d65cb3e16714775c1094d43a0a972576cb1e6ec96eeb8fe6c45a8d89d95b7a\"" Nov 12 20:44:38.404390 containerd[1478]: time="2024-11-12T20:44:38.404369968Z" level=info msg="StartContainer for \"b8d65cb3e16714775c1094d43a0a972576cb1e6ec96eeb8fe6c45a8d89d95b7a\"" Nov 12 
20:44:38.407320 containerd[1478]: time="2024-11-12T20:44:38.407194123Z" level=info msg="CreateContainer within sandbox \"386dcc86183ce05948a23359931ee6c909d59d000530f9e291614655ee57aa3d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74bce8abae8de4f9fd11ba7e18c506deddb10417a74aa71e4ba289f94050b762\"" Nov 12 20:44:38.407777 containerd[1478]: time="2024-11-12T20:44:38.407758424Z" level=info msg="StartContainer for \"74bce8abae8de4f9fd11ba7e18c506deddb10417a74aa71e4ba289f94050b762\"" Nov 12 20:44:38.426983 systemd[1]: Started cri-containerd-8f8e9fd10ddf6f9f595026797475adc0aa6d4a964c838e5a4b0e6f1408468ddd.scope - libcontainer container 8f8e9fd10ddf6f9f595026797475adc0aa6d4a964c838e5a4b0e6f1408468ddd. Nov 12 20:44:38.434101 systemd[1]: Started cri-containerd-b8d65cb3e16714775c1094d43a0a972576cb1e6ec96eeb8fe6c45a8d89d95b7a.scope - libcontainer container b8d65cb3e16714775c1094d43a0a972576cb1e6ec96eeb8fe6c45a8d89d95b7a. Nov 12 20:44:38.439001 systemd[1]: Started cri-containerd-74bce8abae8de4f9fd11ba7e18c506deddb10417a74aa71e4ba289f94050b762.scope - libcontainer container 74bce8abae8de4f9fd11ba7e18c506deddb10417a74aa71e4ba289f94050b762. Nov 12 20:44:38.763177 containerd[1478]: time="2024-11-12T20:44:38.762989574Z" level=info msg="StartContainer for \"8f8e9fd10ddf6f9f595026797475adc0aa6d4a964c838e5a4b0e6f1408468ddd\" returns successfully" Nov 12 20:44:38.763177 containerd[1478]: time="2024-11-12T20:44:38.762992240Z" level=info msg="StartContainer for \"74bce8abae8de4f9fd11ba7e18c506deddb10417a74aa71e4ba289f94050b762\" returns successfully" Nov 12 20:44:38.763177 containerd[1478]: time="2024-11-12T20:44:38.763150026Z" level=info msg="StartContainer for \"b8d65cb3e16714775c1094d43a0a972576cb1e6ec96eeb8fe6c45a8d89d95b7a\" returns successfully" Nov 12 20:44:38.781458 kubelet[2300]: E1112 20:44:38.781421 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:38.786638 kubelet[2300]: E1112 20:44:38.786511 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:39.788287 kubelet[2300]: E1112 20:44:39.788250 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:39.788737 kubelet[2300]: E1112 20:44:39.788385 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:39.823498 kubelet[2300]: E1112 20:44:39.823443 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:44:39.823663 kubelet[2300]: E1112 20:44:39.823584 2300 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:44:39.927584 kubelet[2300]: I1112 20:44:39.927547 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:40.082416 kubelet[2300]: I1112 20:44:40.082281 2300 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:44:40.165134 kubelet[2300]: E1112 20:44:40.165107 2300 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:40.427924 kubelet[2300]: E1112 20:44:40.427806 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:40.528717 kubelet[2300]: E1112 20:44:40.528665 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:40.629841 kubelet[2300]: E1112 20:44:40.629781 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:40.730666 kubelet[2300]: E1112 20:44:40.730496 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:40.830618 kubelet[2300]: E1112 20:44:40.830572 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:40.931218 kubelet[2300]: E1112 20:44:40.931159 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.032047 kubelet[2300]: E1112 20:44:41.031985 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.133113 kubelet[2300]: E1112 20:44:41.133057 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.233779 kubelet[2300]: E1112 20:44:41.233695 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.334448 kubelet[2300]: E1112 20:44:41.334308 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.435006 kubelet[2300]: E1112 20:44:41.434912 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.535966 kubelet[2300]: E1112 20:44:41.535913 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.636981 kubelet[2300]: E1112 20:44:41.636805 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.737885 kubelet[2300]: E1112 20:44:41.737802 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.838995 kubelet[2300]: E1112 20:44:41.838941 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:41.939562 kubelet[2300]: E1112 20:44:41.939435 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.040121 kubelet[2300]: E1112 20:44:42.040072 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.140225 kubelet[2300]: E1112 20:44:42.140161 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.240839 kubelet[2300]: E1112 20:44:42.240686 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.341342 kubelet[2300]: E1112 20:44:42.341283 2300 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" Nov 12 20:44:42.441988 kubelet[2300]: E1112 20:44:42.441927 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.542121 kubelet[2300]: E1112 20:44:42.542044 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.597396 systemd[1]: Reloading requested from client PID 2581 ('systemctl') (unit session-9.scope)... Nov 12 20:44:42.597420 systemd[1]: Reloading... Nov 12 20:44:42.642868 kubelet[2300]: E1112 20:44:42.642807 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.706687 zram_generator::config[2620]: No configuration found. Nov 12 20:44:42.743421 kubelet[2300]: E1112 20:44:42.743373 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.843869 kubelet[2300]: E1112 20:44:42.843697 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.856016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:44:42.943945 kubelet[2300]: E1112 20:44:42.943900 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:42.977414 systemd[1]: Reloading finished in 379 ms. Nov 12 20:44:43.031008 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:43.051638 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:44:43.052012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:43.062238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:43.234890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:43.239329 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:44:43.291427 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:44:43.291427 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:44:43.291427 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:44:43.291875 kubelet[2665]: I1112 20:44:43.291490 2665 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:44:43.296405 kubelet[2665]: I1112 20:44:43.296368 2665 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 20:44:43.296405 kubelet[2665]: I1112 20:44:43.296393 2665 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:44:43.296619 kubelet[2665]: I1112 20:44:43.296595 2665 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 20:44:43.297858 kubelet[2665]: I1112 20:44:43.297830 2665 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:44:43.299126 kubelet[2665]: I1112 20:44:43.299079 2665 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:44:43.307634 kubelet[2665]: I1112 20:44:43.307575 2665 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:44:43.308221 kubelet[2665]: I1112 20:44:43.307877 2665 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:44:43.308778 kubelet[2665]: I1112 20:44:43.307925 2665 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:44:43.308891 kubelet[2665]: I1112 20:44:43.308798 2665 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:44:43.308891 kubelet[2665]: I1112 20:44:43.308814 2665 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:44:43.308891 kubelet[2665]: I1112 20:44:43.308885 2665 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:43.309054 kubelet[2665]: I1112 20:44:43.309015 2665 kubelet.go:400] "Attempting to sync node with API server" Nov 12 20:44:43.309226 kubelet[2665]: I1112 20:44:43.309059 2665 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Nov 12 20:44:43.309226 kubelet[2665]: I1112 20:44:43.309088 2665 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:44:43.309291 kubelet[2665]: I1112 20:44:43.309228 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:44:43.310339 kubelet[2665]: I1112 20:44:43.310112 2665 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:44:43.310459 kubelet[2665]: I1112 20:44:43.310426 2665 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:44:43.311056 kubelet[2665]: I1112 20:44:43.311026 2665 server.go:1264] "Started kubelet" Nov 12 20:44:43.312932 kubelet[2665]: I1112 20:44:43.311327 2665 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:44:43.312932 kubelet[2665]: I1112 20:44:43.312204 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:44:43.312932 kubelet[2665]: I1112 20:44:43.312568 2665 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:44:43.312932 kubelet[2665]: I1112 20:44:43.312736 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:44:43.314211 kubelet[2665]: I1112 20:44:43.314183 2665 server.go:455] "Adding debug handlers to kubelet server" Nov 12 20:44:43.319743 kubelet[2665]: E1112 20:44:43.319663 2665 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:44:43.319743 kubelet[2665]: I1112 20:44:43.319714 2665 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:44:43.319995 kubelet[2665]: I1112 20:44:43.319969 2665 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 20:44:43.320216 kubelet[2665]: I1112 20:44:43.320192 2665 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:44:43.324706 kubelet[2665]: E1112 20:44:43.323824 2665 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:44:43.324706 kubelet[2665]: I1112 20:44:43.324176 2665 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:44:43.327883 kubelet[2665]: I1112 20:44:43.327840 2665 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:44:43.327883 kubelet[2665]: I1112 20:44:43.327860 2665 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:44:43.328544 kubelet[2665]: I1112 20:44:43.328414 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:44:43.330207 kubelet[2665]: I1112 20:44:43.329847 2665 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:44:43.330207 kubelet[2665]: I1112 20:44:43.329892 2665 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:44:43.330207 kubelet[2665]: I1112 20:44:43.329913 2665 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 20:44:43.330207 kubelet[2665]: E1112 20:44:43.329967 2665 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:44:43.364488 kubelet[2665]: I1112 20:44:43.364456 2665 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:44:43.364488 kubelet[2665]: I1112 20:44:43.364474 2665 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:44:43.364488 kubelet[2665]: I1112 20:44:43.364493 2665 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:43.364731 kubelet[2665]: I1112 20:44:43.364708 2665 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:44:43.364756 kubelet[2665]: I1112 20:44:43.364723 2665 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:44:43.364756 kubelet[2665]: I1112 20:44:43.364747 2665 policy_none.go:49] "None policy: Start" Nov 12 20:44:43.365368 kubelet[2665]: I1112 20:44:43.365347 2665 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:44:43.365410 kubelet[2665]: I1112 20:44:43.365371 2665 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:44:43.365523 kubelet[2665]: I1112 20:44:43.365506 2665 state_mem.go:75] "Updated machine memory state" Nov 12 20:44:43.370369 kubelet[2665]: I1112 20:44:43.370344 2665 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:44:43.370775 kubelet[2665]: I1112 20:44:43.370541 2665 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:44:43.370775 kubelet[2665]: I1112 20:44:43.370694 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:44:43.425612 kubelet[2665]: I1112 20:44:43.425557 2665 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:43.430788 kubelet[2665]: I1112 20:44:43.430714 2665 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:44:43.430917 kubelet[2665]: I1112 20:44:43.430856 2665 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:44:43.430951 kubelet[2665]: I1112 20:44:43.430927 2665 topology_manager.go:215] "Topology Admit Handler" podUID="c38721aa07c6c22ac7bea9feb0c13a62" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:44:43.521114 kubelet[2665]: I1112 20:44:43.521025 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:43.621877 kubelet[2665]: I1112 20:44:43.621813 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:43.622133 kubelet[2665]: I1112 20:44:43.621893 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:43.622133 kubelet[2665]: I1112 20:44:43.621927 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:43.622133 kubelet[2665]: I1112 20:44:43.621950 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:44:43.622133 kubelet[2665]: I1112 20:44:43.621969 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:43.622133 kubelet[2665]: I1112 20:44:43.621988 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c38721aa07c6c22ac7bea9feb0c13a62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c38721aa07c6c22ac7bea9feb0c13a62\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:44:43.622246 kubelet[2665]: I1112 20:44:43.622008 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:43.622246 kubelet[2665]: I1112 20:44:43.622028 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:43.683185 kubelet[2665]: I1112 20:44:43.682682 2665 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:44:43.683185 kubelet[2665]: I1112 20:44:43.682800 2665 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:44:43.878355 kubelet[2665]: E1112 20:44:43.878205 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:43.984217 kubelet[2665]: E1112 20:44:43.984158 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:43.984461 kubelet[2665]: E1112 20:44:43.984429 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:44.310461 kubelet[2665]: I1112 20:44:44.310402 2665 apiserver.go:52] "Watching apiserver" Nov 12 20:44:44.320913 kubelet[2665]: I1112 20:44:44.320861 2665 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 20:44:44.347986 kubelet[2665]: E1112 20:44:44.346500 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:44.347986 kubelet[2665]: E1112 20:44:44.347137 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:44.404247 kubelet[2665]: E1112 20:44:44.404197 2665 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 20:44:44.404541 kubelet[2665]: I1112 20:44:44.404477 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.404454489 podStartE2EDuration="1.404454489s" podCreationTimestamp="2024-11-12 20:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:44.404290709 +0000 UTC m=+1.161041965" watchObservedRunningTime="2024-11-12 20:44:44.404454489 +0000 UTC m=+1.161205745" Nov 12 20:44:44.404780 kubelet[2665]: E1112 20:44:44.404578 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:44.824363 kubelet[2665]: I1112 20:44:44.824268 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.824243471 podStartE2EDuration="1.824243471s" podCreationTimestamp="2024-11-12 20:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:44.645851777 +0000 UTC m=+1.402603023" watchObservedRunningTime="2024-11-12 20:44:44.824243471 +0000 UTC m=+1.580994717" Nov 12 20:44:44.824849 kubelet[2665]: I1112 20:44:44.824455 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.824449169 podStartE2EDuration="1.824449169s" podCreationTimestamp="2024-11-12 20:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:44.824229411 +0000 UTC m=+1.580980667" watchObservedRunningTime="2024-11-12 20:44:44.824449169 +0000 UTC m=+1.581200415" Nov 12 20:44:45.347238 kubelet[2665]: E1112 20:44:45.347198 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:45.347705 kubelet[2665]: E1112 20:44:45.347535 2665 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:45.347705 kubelet[2665]: E1112 20:44:45.347535 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:49.999024 sudo[1665]: pam_unix(sudo:session): session closed for user root Nov 12 20:44:50.002183 sshd[1662]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:50.008211 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:55100.service: Deactivated successfully. Nov 12 20:44:50.010897 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:44:50.011135 systemd[1]: session-9.scope: Consumed 5.969s CPU time, 193.2M memory peak, 0B memory swap peak. Nov 12 20:44:50.012042 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:44:50.013203 systemd-logind[1451]: Removed session 9. Nov 12 20:44:50.096295 kubelet[2665]: E1112 20:44:50.096235 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:50.357282 kubelet[2665]: E1112 20:44:50.357251 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:51.359011 kubelet[2665]: E1112 20:44:51.358967 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:52.336133 kubelet[2665]: E1112 20:44:52.336091 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:52.360049 kubelet[2665]: E1112 20:44:52.359973 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:55.219417 kubelet[2665]: E1112 20:44:55.219385 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:58.435451 kubelet[2665]: I1112 20:44:58.435403 2665 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:44:58.435945 containerd[1478]: time="2024-11-12T20:44:58.435848989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:44:58.436251 kubelet[2665]: I1112 20:44:58.436034 2665 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:44:59.157494 kubelet[2665]: I1112 20:44:59.157436 2665 topology_manager.go:215] "Topology Admit Handler" podUID="da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44" podNamespace="kube-system" podName="kube-proxy-gsxjr" Nov 12 20:44:59.165284 systemd[1]: Created slice kubepods-besteffort-podda1d9fa2_c5e2_4c7c_972e_b08ca7e6ad44.slice - libcontainer container kubepods-besteffort-podda1d9fa2_c5e2_4c7c_972e_b08ca7e6ad44.slice. 
Nov 12 20:44:59.316337 kubelet[2665]: I1112 20:44:59.316277 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44-lib-modules\") pod \"kube-proxy-gsxjr\" (UID: \"da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44\") " pod="kube-system/kube-proxy-gsxjr" Nov 12 20:44:59.316337 kubelet[2665]: I1112 20:44:59.316320 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvxl\" (UniqueName: \"kubernetes.io/projected/da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44-kube-api-access-jxvxl\") pod \"kube-proxy-gsxjr\" (UID: \"da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44\") " pod="kube-system/kube-proxy-gsxjr" Nov 12 20:44:59.316337 kubelet[2665]: I1112 20:44:59.316337 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44-kube-proxy\") pod \"kube-proxy-gsxjr\" (UID: \"da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44\") " pod="kube-system/kube-proxy-gsxjr" Nov 12 20:44:59.316549 kubelet[2665]: I1112 20:44:59.316374 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44-xtables-lock\") pod \"kube-proxy-gsxjr\" (UID: \"da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44\") " pod="kube-system/kube-proxy-gsxjr" Nov 12 20:44:59.368560 kubelet[2665]: I1112 20:44:59.368491 2665 topology_manager.go:215] "Topology Admit Handler" podUID="cd41bb74-cdd4-4488-b0f9-2f661123ae48" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-4lt6r" Nov 12 20:44:59.376049 systemd[1]: Created slice kubepods-besteffort-podcd41bb74_cdd4_4488_b0f9_2f661123ae48.slice - libcontainer container kubepods-besteffort-podcd41bb74_cdd4_4488_b0f9_2f661123ae48.slice. 
Nov 12 20:44:59.517876 kubelet[2665]: I1112 20:44:59.517824 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd41bb74-cdd4-4488-b0f9-2f661123ae48-var-lib-calico\") pod \"tigera-operator-5645cfc98-4lt6r\" (UID: \"cd41bb74-cdd4-4488-b0f9-2f661123ae48\") " pod="tigera-operator/tigera-operator-5645cfc98-4lt6r" Nov 12 20:44:59.517876 kubelet[2665]: I1112 20:44:59.517876 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsdk\" (UniqueName: \"kubernetes.io/projected/cd41bb74-cdd4-4488-b0f9-2f661123ae48-kube-api-access-cnsdk\") pod \"tigera-operator-5645cfc98-4lt6r\" (UID: \"cd41bb74-cdd4-4488-b0f9-2f661123ae48\") " pod="tigera-operator/tigera-operator-5645cfc98-4lt6r" Nov 12 20:44:59.679375 containerd[1478]: time="2024-11-12T20:44:59.679324475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-4lt6r,Uid:cd41bb74-cdd4-4488-b0f9-2f661123ae48,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:44:59.776193 kubelet[2665]: E1112 20:44:59.775860 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:59.776394 containerd[1478]: time="2024-11-12T20:44:59.776346603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsxjr,Uid:da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44,Namespace:kube-system,Attempt:0,}" Nov 12 20:45:01.223669 containerd[1478]: time="2024-11-12T20:45:01.223524667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:01.223669 containerd[1478]: time="2024-11-12T20:45:01.223612263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:01.223669 containerd[1478]: time="2024-11-12T20:45:01.223636923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:01.224288 containerd[1478]: time="2024-11-12T20:45:01.223749490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:01.252781 systemd[1]: Started cri-containerd-e5424d45733b5447fb78c73c456dc0d2f7f229940f931f2f6b4a44a3b80839a4.scope - libcontainer container e5424d45733b5447fb78c73c456dc0d2f7f229940f931f2f6b4a44a3b80839a4. Nov 12 20:45:01.286159 containerd[1478]: time="2024-11-12T20:45:01.285983603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:01.286159 containerd[1478]: time="2024-11-12T20:45:01.286047903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:01.286159 containerd[1478]: time="2024-11-12T20:45:01.286092553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:01.286948 containerd[1478]: time="2024-11-12T20:45:01.286228706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:01.294263 containerd[1478]: time="2024-11-12T20:45:01.294209308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-4lt6r,Uid:cd41bb74-cdd4-4488-b0f9-2f661123ae48,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5424d45733b5447fb78c73c456dc0d2f7f229940f931f2f6b4a44a3b80839a4\"" Nov 12 20:45:01.298414 containerd[1478]: time="2024-11-12T20:45:01.298366365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:45:01.312925 systemd[1]: Started cri-containerd-a924906b2c52b84cdb962fe238353a77aeb0d4e69170e2620d6c2eb5a05b900c.scope - libcontainer container a924906b2c52b84cdb962fe238353a77aeb0d4e69170e2620d6c2eb5a05b900c. Nov 12 20:45:01.335585 containerd[1478]: time="2024-11-12T20:45:01.335467205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsxjr,Uid:da1d9fa2-c5e2-4c7c-972e-b08ca7e6ad44,Namespace:kube-system,Attempt:0,} returns sandbox id \"a924906b2c52b84cdb962fe238353a77aeb0d4e69170e2620d6c2eb5a05b900c\"" Nov 12 20:45:01.336243 kubelet[2665]: E1112 20:45:01.336217 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:01.338618 containerd[1478]: time="2024-11-12T20:45:01.338554468Z" level=info msg="CreateContainer within sandbox \"a924906b2c52b84cdb962fe238353a77aeb0d4e69170e2620d6c2eb5a05b900c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:45:01.533821 containerd[1478]: time="2024-11-12T20:45:01.533744640Z" level=info msg="CreateContainer within sandbox \"a924906b2c52b84cdb962fe238353a77aeb0d4e69170e2620d6c2eb5a05b900c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63bafdf7563a38a2afd5e9d7f620a49e44b37e219484575ccd28b0a0865ac528\"" Nov 12 20:45:01.534582 containerd[1478]: time="2024-11-12T20:45:01.534548489Z" level=info msg="StartContainer for \"63bafdf7563a38a2afd5e9d7f620a49e44b37e219484575ccd28b0a0865ac528\"" Nov 12 20:45:01.566789 systemd[1]: Started cri-containerd-63bafdf7563a38a2afd5e9d7f620a49e44b37e219484575ccd28b0a0865ac528.scope - libcontainer container 63bafdf7563a38a2afd5e9d7f620a49e44b37e219484575ccd28b0a0865ac528. Nov 12 20:45:01.604100 containerd[1478]: time="2024-11-12T20:45:01.603994636Z" level=info msg="StartContainer for \"63bafdf7563a38a2afd5e9d7f620a49e44b37e219484575ccd28b0a0865ac528\" returns successfully" Nov 12 20:45:02.378567 kubelet[2665]: E1112 20:45:02.378538 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:03.379805 kubelet[2665]: E1112 20:45:03.379761 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:06.710475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284766690.mount: Deactivated successfully. 
Nov 12 20:45:07.083398 containerd[1478]: time="2024-11-12T20:45:07.083308299Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:07.091815 containerd[1478]: time="2024-11-12T20:45:07.091720176Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763343" Nov 12 20:45:07.094115 containerd[1478]: time="2024-11-12T20:45:07.094046603Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:07.097866 containerd[1478]: time="2024-11-12T20:45:07.097811158Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:07.098749 containerd[1478]: time="2024-11-12T20:45:07.098676811Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 5.800254442s" Nov 12 20:45:07.098749 containerd[1478]: time="2024-11-12T20:45:07.098732973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:45:07.104921 containerd[1478]: time="2024-11-12T20:45:07.104877221Z" level=info msg="CreateContainer within sandbox \"e5424d45733b5447fb78c73c456dc0d2f7f229940f931f2f6b4a44a3b80839a4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:45:07.123734 containerd[1478]: time="2024-11-12T20:45:07.123650588Z" level=info msg="CreateContainer within sandbox \"e5424d45733b5447fb78c73c456dc0d2f7f229940f931f2f6b4a44a3b80839a4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49c1f53057a635ab8d84b3a2e05a30a7eda9a683b477ea3b9eaa061164c021a0\"" Nov 12 20:45:07.124406 containerd[1478]: time="2024-11-12T20:45:07.124329586Z" level=info msg="StartContainer for \"49c1f53057a635ab8d84b3a2e05a30a7eda9a683b477ea3b9eaa061164c021a0\"" Nov 12 20:45:07.156810 systemd[1]: Started cri-containerd-49c1f53057a635ab8d84b3a2e05a30a7eda9a683b477ea3b9eaa061164c021a0.scope - libcontainer container 49c1f53057a635ab8d84b3a2e05a30a7eda9a683b477ea3b9eaa061164c021a0. 
Nov 12 20:45:07.184318 containerd[1478]: time="2024-11-12T20:45:07.184269221Z" level=info msg="StartContainer for \"49c1f53057a635ab8d84b3a2e05a30a7eda9a683b477ea3b9eaa061164c021a0\" returns successfully" Nov 12 20:45:07.404500 kubelet[2665]: I1112 20:45:07.404317 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gsxjr" podStartSLOduration=8.404291664 podStartE2EDuration="8.404291664s" podCreationTimestamp="2024-11-12 20:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:02.46383466 +0000 UTC m=+19.220585906" watchObservedRunningTime="2024-11-12 20:45:07.404291664 +0000 UTC m=+24.161042920" Nov 12 20:45:09.972093 kubelet[2665]: I1112 20:45:09.971968 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-4lt6r" podStartSLOduration=5.16498603 podStartE2EDuration="10.971925892s" podCreationTimestamp="2024-11-12 20:44:59 +0000 UTC" firstStartedPulling="2024-11-12 20:45:01.295985142 +0000 UTC m=+18.052736388" lastFinishedPulling="2024-11-12 20:45:07.102924993 +0000 UTC m=+23.859676250" observedRunningTime="2024-11-12 20:45:07.404513449 +0000 UTC m=+24.161264705" watchObservedRunningTime="2024-11-12 20:45:09.971925892 +0000 UTC m=+26.728677138" Nov 12 20:45:09.972636 kubelet[2665]: I1112 20:45:09.972135 2665 topology_manager.go:215] "Topology Admit Handler" podUID="f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac" podNamespace="calico-system" podName="calico-typha-58c5b8cf-p6sj7" Nov 12 20:45:09.989095 systemd[1]: Created slice kubepods-besteffort-podf85b5fcb_f4e4_411a_b4fa_2295e1cd93ac.slice - libcontainer container kubepods-besteffort-podf85b5fcb_f4e4_411a_b4fa_2295e1cd93ac.slice. Nov 12 20:45:10.076292 kubelet[2665]: I1112 20:45:10.076231 2665 topology_manager.go:215] "Topology Admit Handler" podUID="cea1fbbe-8911-46d3-85b9-2ebc34807e70" podNamespace="calico-system" podName="calico-node-nlwcd" Nov 12 20:45:10.085280 systemd[1]: Created slice kubepods-besteffort-podcea1fbbe_8911_46d3_85b9_2ebc34807e70.slice - libcontainer container kubepods-besteffort-podcea1fbbe_8911_46d3_85b9_2ebc34807e70.slice. 
Nov 12 20:45:10.087453 kubelet[2665]: I1112 20:45:10.087414 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac-typha-certs\") pod \"calico-typha-58c5b8cf-p6sj7\" (UID: \"f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac\") " pod="calico-system/calico-typha-58c5b8cf-p6sj7" Nov 12 20:45:10.087551 kubelet[2665]: I1112 20:45:10.087474 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxt6\" (UniqueName: \"kubernetes.io/projected/f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac-kube-api-access-qtxt6\") pod \"calico-typha-58c5b8cf-p6sj7\" (UID: \"f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac\") " pod="calico-system/calico-typha-58c5b8cf-p6sj7" Nov 12 20:45:10.087551 kubelet[2665]: I1112 20:45:10.087504 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac-tigera-ca-bundle\") pod \"calico-typha-58c5b8cf-p6sj7\" (UID: \"f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac\") " pod="calico-system/calico-typha-58c5b8cf-p6sj7" Nov 12 20:45:10.188872 kubelet[2665]: I1112 20:45:10.188816 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-cni-net-dir\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.188872 kubelet[2665]: I1112 20:45:10.188875 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-policysync\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.188872 kubelet[2665]: I1112 20:45:10.188896 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-cni-log-dir\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189126 kubelet[2665]: I1112 20:45:10.188923 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmnp\" (UniqueName: \"kubernetes.io/projected/cea1fbbe-8911-46d3-85b9-2ebc34807e70-kube-api-access-zjmnp\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189126 kubelet[2665]: I1112 20:45:10.188971 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-lib-modules\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189126 kubelet[2665]: I1112 20:45:10.188991 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-var-lib-calico\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 
20:45:10.189126 kubelet[2665]: I1112 20:45:10.189010 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-flexvol-driver-host\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189126 kubelet[2665]: I1112 20:45:10.189031 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea1fbbe-8911-46d3-85b9-2ebc34807e70-tigera-ca-bundle\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189316 kubelet[2665]: I1112 20:45:10.189095 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cea1fbbe-8911-46d3-85b9-2ebc34807e70-node-certs\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189316 kubelet[2665]: I1112 20:45:10.189118 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-var-run-calico\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189316 kubelet[2665]: I1112 20:45:10.189172 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-xtables-lock\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.189316 kubelet[2665]: I1112 20:45:10.189196 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cea1fbbe-8911-46d3-85b9-2ebc34807e70-cni-bin-dir\") pod \"calico-node-nlwcd\" (UID: \"cea1fbbe-8911-46d3-85b9-2ebc34807e70\") " pod="calico-system/calico-node-nlwcd" Nov 12 20:45:10.297072 kubelet[2665]: E1112 20:45:10.297011 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:10.299903 containerd[1478]: time="2024-11-12T20:45:10.299850653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c5b8cf-p6sj7,Uid:f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:10.307358 kubelet[2665]: E1112 20:45:10.306085 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.307358 kubelet[2665]: W1112 20:45:10.306145 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.307358 kubelet[2665]: E1112 20:45:10.306183 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.313588 kubelet[2665]: I1112 20:45:10.312504 2665 topology_manager.go:215] "Topology Admit Handler" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" podNamespace="calico-system" podName="csi-node-driver-qwkzp" Nov 12 20:45:10.313588 kubelet[2665]: E1112 20:45:10.312877 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:10.315217 kubelet[2665]: E1112 20:45:10.315191 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.315217 kubelet[2665]: W1112 20:45:10.315209 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.315308 kubelet[2665]: E1112 20:45:10.315227 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.350489 containerd[1478]: time="2024-11-12T20:45:10.350116889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:10.350489 containerd[1478]: time="2024-11-12T20:45:10.350194474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:10.350489 containerd[1478]: time="2024-11-12T20:45:10.350206097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:10.350489 containerd[1478]: time="2024-11-12T20:45:10.350330125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:10.378909 systemd[1]: Started cri-containerd-84ad741abec60c4544ca5e38f30fdc38c8c03984b27109ac7d063f017070a9a3.scope - libcontainer container 84ad741abec60c4544ca5e38f30fdc38c8c03984b27109ac7d063f017070a9a3. Nov 12 20:45:10.387564 kubelet[2665]: E1112 20:45:10.387512 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.387877 kubelet[2665]: W1112 20:45:10.387722 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.387877 kubelet[2665]: E1112 20:45:10.387759 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.388582 kubelet[2665]: E1112 20:45:10.388434 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.388582 kubelet[2665]: W1112 20:45:10.388448 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.388582 kubelet[2665]: E1112 20:45:10.388460 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.389010 kubelet[2665]: E1112 20:45:10.388832 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.389010 kubelet[2665]: W1112 20:45:10.388893 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.389010 kubelet[2665]: E1112 20:45:10.388907 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.389768 kubelet[2665]: E1112 20:45:10.389613 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.389768 kubelet[2665]: W1112 20:45:10.389693 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.389768 kubelet[2665]: E1112 20:45:10.389706 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.391345 kubelet[2665]: E1112 20:45:10.391263 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.391345 kubelet[2665]: W1112 20:45:10.391295 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.391345 kubelet[2665]: E1112 20:45:10.391325 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.391899 kubelet[2665]: E1112 20:45:10.391851 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.391899 kubelet[2665]: W1112 20:45:10.391868 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.391899 kubelet[2665]: E1112 20:45:10.391881 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.392327 kubelet[2665]: E1112 20:45:10.392291 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.392327 kubelet[2665]: W1112 20:45:10.392313 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.392914 kubelet[2665]: E1112 20:45:10.392329 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.392914 kubelet[2665]: E1112 20:45:10.392707 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.392914 kubelet[2665]: W1112 20:45:10.392720 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.392914 kubelet[2665]: E1112 20:45:10.392732 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.393344 kubelet[2665]: E1112 20:45:10.393324 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:10.394105 kubelet[2665]: E1112 20:45:10.394091 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.394184 kubelet[2665]: W1112 20:45:10.394169 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.394266 kubelet[2665]: E1112 20:45:10.394255 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.394513 kubelet[2665]: I1112 20:45:10.394486 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb4c4a07-9a98-43af-84e7-91573664a62a-kubelet-dir\") pod \"csi-node-driver-qwkzp\" (UID: \"fb4c4a07-9a98-43af-84e7-91573664a62a\") " pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:10.394725 kubelet[2665]: E1112 20:45:10.394602 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.394725 kubelet[2665]: W1112 20:45:10.394672 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.394725 kubelet[2665]: E1112 20:45:10.394682 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.394978 containerd[1478]: time="2024-11-12T20:45:10.394926516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nlwcd,Uid:cea1fbbe-8911-46d3-85b9-2ebc34807e70,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:10.395153 kubelet[2665]: E1112 20:45:10.395098 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.395153 kubelet[2665]: W1112 20:45:10.395108 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.395153 kubelet[2665]: E1112 20:45:10.395119 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.395729 kubelet[2665]: E1112 20:45:10.395665 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.395729 kubelet[2665]: W1112 20:45:10.395676 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.395729 kubelet[2665]: E1112 20:45:10.395686 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.396056 kubelet[2665]: E1112 20:45:10.395999 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.396056 kubelet[2665]: W1112 20:45:10.396010 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.396056 kubelet[2665]: E1112 20:45:10.396019 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.396637 kubelet[2665]: E1112 20:45:10.396609 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.397313 kubelet[2665]: W1112 20:45:10.397244 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.397581 kubelet[2665]: E1112 20:45:10.397564 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.397832 kubelet[2665]: E1112 20:45:10.397811 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.397832 kubelet[2665]: W1112 20:45:10.397830 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.398035 kubelet[2665]: E1112 20:45:10.397995 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.398687 kubelet[2665]: E1112 20:45:10.398672 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.398873 kubelet[2665]: W1112 20:45:10.398756 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.398873 kubelet[2665]: E1112 20:45:10.398826 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.398873 kubelet[2665]: I1112 20:45:10.398855 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb4c4a07-9a98-43af-84e7-91573664a62a-varrun\") pod \"csi-node-driver-qwkzp\" (UID: \"fb4c4a07-9a98-43af-84e7-91573664a62a\") " pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:10.399311 kubelet[2665]: E1112 20:45:10.399221 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.399311 kubelet[2665]: W1112 20:45:10.399234 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.400727 kubelet[2665]: E1112 20:45:10.399614 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.400727 kubelet[2665]: W1112 20:45:10.399652 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.400727 kubelet[2665]: E1112 20:45:10.399685 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.400727 kubelet[2665]: E1112 20:45:10.399906 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.400938 kubelet[2665]: E1112 20:45:10.400912 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.401227 kubelet[2665]: W1112 20:45:10.400934 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.401267 kubelet[2665]: E1112 20:45:10.401232 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.401747 kubelet[2665]: E1112 20:45:10.401713 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.401747 kubelet[2665]: W1112 20:45:10.401734 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.401824 kubelet[2665]: E1112 20:45:10.401747 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.402669 kubelet[2665]: E1112 20:45:10.402514 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.402669 kubelet[2665]: W1112 20:45:10.402546 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.402669 kubelet[2665]: E1112 20:45:10.402559 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.402899 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.404087 kubelet[2665]: W1112 20:45:10.402919 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.402931 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.403242 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.404087 kubelet[2665]: W1112 20:45:10.403257 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.403275 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.403842 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.404087 kubelet[2665]: W1112 20:45:10.403853 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.404087 kubelet[2665]: E1112 20:45:10.403864 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:10.435274 containerd[1478]: time="2024-11-12T20:45:10.435150062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c5b8cf-p6sj7,Uid:f85b5fcb-f4e4-411a-b4fa-2295e1cd93ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"84ad741abec60c4544ca5e38f30fdc38c8c03984b27109ac7d063f017070a9a3\"" Nov 12 20:45:10.439081 kubelet[2665]: E1112 20:45:10.439042 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:10.445018 containerd[1478]: time="2024-11-12T20:45:10.444982170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:45:10.459922 containerd[1478]: time="2024-11-12T20:45:10.459739147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:10.459922 containerd[1478]: time="2024-11-12T20:45:10.459846742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:10.459922 containerd[1478]: time="2024-11-12T20:45:10.459868084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:10.460138 containerd[1478]: time="2024-11-12T20:45:10.460014607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:10.484900 systemd[1]: Started cri-containerd-86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3.scope - libcontainer container 86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3. Nov 12 20:45:10.510142 kubelet[2665]: I1112 20:45:10.509890 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb4c4a07-9a98-43af-84e7-91573664a62a-socket-dir\") pod \"csi-node-driver-qwkzp\" (UID: \"fb4c4a07-9a98-43af-84e7-91573664a62a\") " pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:10.513049 kubelet[2665]: I1112 20:45:10.513028 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb4c4a07-9a98-43af-84e7-91573664a62a-registration-dir\") pod \"csi-node-driver-qwkzp\" (UID: \"fb4c4a07-9a98-43af-84e7-91573664a62a\") " pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:10.515099 kubelet[2665]: I1112 20:45:10.515084 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5mh8\" (UniqueName: \"kubernetes.io/projected/fb4c4a07-9a98-43af-84e7-91573664a62a-kube-api-access-v5mh8\") pod \"csi-node-driver-qwkzp\" (UID: \"fb4c4a07-9a98-43af-84e7-91573664a62a\") " pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:10.523291 containerd[1478]: time="2024-11-12T20:45:10.523198225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nlwcd,Uid:cea1fbbe-8911-46d3-85b9-2ebc34807e70,Namespace:calico-system,Attempt:0,} returns sandbox id \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\"" Nov 12 20:45:10.524077 kubelet[2665]: E1112 20:45:10.524039 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:10.649970 kubelet[2665]: E1112 20:45:10.649944 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:10.649970 kubelet[2665]: W1112 20:45:10.649962 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:10.650131 kubelet[2665]: E1112 20:45:10.649980 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 12 20:45:12.331067 kubelet[2665]: E1112 20:45:12.330966 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:14.553558 kubelet[2665]: E1112 20:45:14.553478 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:14.638659 containerd[1478]: time="2024-11-12T20:45:14.638576068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:14.652998 containerd[1478]: time="2024-11-12T20:45:14.652889376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:45:14.665642 containerd[1478]: time="2024-11-12T20:45:14.665554459Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:14.695543 containerd[1478]: time="2024-11-12T20:45:14.695479132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:14.696178 containerd[1478]: time="2024-11-12T20:45:14.696129027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 4.250965885s" Nov 12 20:45:14.696178 containerd[1478]: time="2024-11-12T20:45:14.696169007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:45:14.698688 containerd[1478]: time="2024-11-12T20:45:14.698644822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:45:14.709096 containerd[1478]: time="2024-11-12T20:45:14.709036924Z" level=info msg="CreateContainer within sandbox \"84ad741abec60c4544ca5e38f30fdc38c8c03984b27109ac7d063f017070a9a3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:45:15.401024 containerd[1478]: time="2024-11-12T20:45:15.400921963Z" level=info msg="CreateContainer within sandbox \"84ad741abec60c4544ca5e38f30fdc38c8c03984b27109ac7d063f017070a9a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed1b70a5c8465a4134438171423250b4f39b56fb07e7259e6448683baaeef6b3\"" Nov 12 20:45:15.401592 containerd[1478]: time="2024-11-12T20:45:15.401566909Z" level=info msg="StartContainer for \"ed1b70a5c8465a4134438171423250b4f39b56fb07e7259e6448683baaeef6b3\"" Nov 12 20:45:15.435878 systemd[1]: Started cri-containerd-ed1b70a5c8465a4134438171423250b4f39b56fb07e7259e6448683baaeef6b3.scope - libcontainer container 
ed1b70a5c8465a4134438171423250b4f39b56fb07e7259e6448683baaeef6b3. Nov 12 20:45:15.833658 containerd[1478]: time="2024-11-12T20:45:15.833595666Z" level=info msg="StartContainer for \"ed1b70a5c8465a4134438171423250b4f39b56fb07e7259e6448683baaeef6b3\" returns successfully" Nov 12 20:45:16.330954 kubelet[2665]: E1112 20:45:16.330898 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:16.422591 kubelet[2665]: E1112 20:45:16.422544 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:16.445884 kubelet[2665]: I1112 20:45:16.445749 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58c5b8cf-p6sj7" podStartSLOduration=3.189959345 podStartE2EDuration="7.44572594s" podCreationTimestamp="2024-11-12 20:45:09 +0000 UTC" firstStartedPulling="2024-11-12 20:45:10.442379472 +0000 UTC m=+27.199130718" lastFinishedPulling="2024-11-12 20:45:14.698146067 +0000 UTC m=+31.454897313" observedRunningTime="2024-11-12 20:45:16.445346462 +0000 UTC m=+33.202097708" watchObservedRunningTime="2024-11-12 20:45:16.44572594 +0000 UTC m=+33.202477196" Nov 12 20:45:16.448454 kubelet[2665]: E1112 20:45:16.448400 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:16.448558 kubelet[2665]: W1112 20:45:16.448454 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:16.448558 kubelet[2665]: E1112 20:45:16.448484 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:45:16.469099 kubelet[2665]: E1112 20:45:16.469069 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:45:16.469099 kubelet[2665]: W1112 20:45:16.469090 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:45:16.469099 kubelet[2665]: E1112 20:45:16.469098 2665 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 12 20:45:16.841218 containerd[1478]: time="2024-11-12T20:45:16.841140953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.841985 containerd[1478]: time="2024-11-12T20:45:16.841944868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:45:16.843258 containerd[1478]: time="2024-11-12T20:45:16.843194804Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.845383 containerd[1478]: time="2024-11-12T20:45:16.845338663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:16.846209 containerd[1478]: time="2024-11-12T20:45:16.846168681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 2.14748452s" Nov 12 20:45:16.846209 containerd[1478]: time="2024-11-12T20:45:16.846203830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:45:16.849561 containerd[1478]: time="2024-11-12T20:45:16.849481675Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:45:16.870073 containerd[1478]: time="2024-11-12T20:45:16.870027202Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea\"" Nov 12 20:45:16.870694 containerd[1478]: time="2024-11-12T20:45:16.870604822Z" level=info msg="StartContainer for \"3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea\"" Nov 12 20:45:16.908202 systemd[1]: Started cri-containerd-3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea.scope - libcontainer container 3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea. Nov 12 20:45:16.941944 containerd[1478]: time="2024-11-12T20:45:16.941775179Z" level=info msg="StartContainer for \"3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea\" returns successfully" Nov 12 20:45:16.955968 systemd[1]: cri-containerd-3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea.scope: Deactivated successfully. Nov 12 20:45:16.983021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea-rootfs.mount: Deactivated successfully. Nov 12 20:45:17.013706 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:49338.service - OpenSSH per-connection server daemon (10.0.0.1:49338). 
Nov 12 20:45:17.424842 kubelet[2665]: I1112 20:45:17.424809 2665 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:45:17.425372 kubelet[2665]: E1112 20:45:17.425316 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:17.425424 kubelet[2665]: E1112 20:45:17.425405 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:17.590828 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 49338 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:17.592767 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:17.601932 systemd-logind[1451]: New session 10 of user core. Nov 12 20:45:17.607760 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:45:18.024530 sshd[3362]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:18.028915 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:49338.service: Deactivated successfully. Nov 12 20:45:18.030863 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:45:18.031857 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:45:18.033011 systemd-logind[1451]: Removed session 10. Nov 12 20:45:18.330732 kubelet[2665]: E1112 20:45:18.330576 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:18.427134 kubelet[2665]: E1112 20:45:18.427089 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:18.466241 containerd[1478]: time="2024-11-12T20:45:18.463847898Z" level=info msg="shim disconnected" id=3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea namespace=k8s.io Nov 12 20:45:18.466241 containerd[1478]: time="2024-11-12T20:45:18.466220903Z" level=warning msg="cleaning up after shim disconnected" id=3f4250069a831febad5a028f7f7c6211475690605e54728a8e3877a8fe9c79ea namespace=k8s.io Nov 12 20:45:18.466241 containerd[1478]: time="2024-11-12T20:45:18.466231893Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:45:19.429248 kubelet[2665]: E1112 20:45:19.429219 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:19.429822 containerd[1478]: time="2024-11-12T20:45:19.429794507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:45:20.330782 kubelet[2665]: E1112 20:45:20.330727 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:22.331116 kubelet[2665]: E1112 20:45:22.331038 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:23.036349 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:49354.service - OpenSSH per-connection server daemon (10.0.0.1:49354). Nov 12 20:45:23.094037 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 49354 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:23.095937 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:23.100865 systemd-logind[1451]: New session 11 of user core. Nov 12 20:45:23.109878 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:45:23.226758 sshd[3400]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:23.230985 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:49354.service: Deactivated successfully. Nov 12 20:45:23.233326 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:45:23.233999 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:45:23.235065 systemd-logind[1451]: Removed session 11. Nov 12 20:45:24.330761 kubelet[2665]: E1112 20:45:24.330599 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:26.200576 containerd[1478]: time="2024-11-12T20:45:26.200012419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:26.208092 containerd[1478]: time="2024-11-12T20:45:26.207466333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:45:26.237580 containerd[1478]: time="2024-11-12T20:45:26.237502119Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:26.264666 containerd[1478]: time="2024-11-12T20:45:26.264548183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:26.265571 containerd[1478]: time="2024-11-12T20:45:26.265453655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 6.835619545s" Nov 12 20:45:26.265571 containerd[1478]: time="2024-11-12T20:45:26.265515820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:45:26.269899 containerd[1478]: time="2024-11-12T20:45:26.269828504Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:45:26.331173 kubelet[2665]: E1112 
20:45:26.330568 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:26.586746 containerd[1478]: time="2024-11-12T20:45:26.586660343Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff\"" Nov 12 20:45:26.587618 containerd[1478]: time="2024-11-12T20:45:26.587508520Z" level=info msg="StartContainer for \"04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff\"" Nov 12 20:45:26.632053 systemd[1]: Started cri-containerd-04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff.scope - libcontainer container 04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff. Nov 12 20:45:27.005175 containerd[1478]: time="2024-11-12T20:45:27.005028610Z" level=info msg="StartContainer for \"04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff\" returns successfully" Nov 12 20:45:27.445879 kubelet[2665]: E1112 20:45:27.445837 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:28.240330 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:60276.service - OpenSSH per-connection server daemon (10.0.0.1:60276). Nov 12 20:45:28.330611 kubelet[2665]: E1112 20:45:28.330542 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:28.338367 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 60276 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:28.340234 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:28.345395 systemd-logind[1451]: New session 12 of user core. Nov 12 20:45:28.349822 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:45:29.498171 sshd[3458]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:29.503029 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:60276.service: Deactivated successfully. Nov 12 20:45:29.505395 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:45:29.506162 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:45:29.507419 systemd-logind[1451]: Removed session 12. Nov 12 20:45:30.331144 kubelet[2665]: E1112 20:45:30.331081 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:30.458696 systemd[1]: cri-containerd-04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff.scope: Deactivated successfully. 
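The repeated dns.go:153 "Nameserver limits exceeded" entries in this stretch come from kubelet enforcing the glibc resolver's three-nameserver ceiling: when the node's effective resolv.conf lists more than three servers, kubelet keeps the first three and drops the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation follows; the names are hypothetical and this is not kubelet's actual Go code.

```python
# Illustrative sketch only: kubelet caps a pod's resolv.conf at three
# nameservers (the glibc resolver reads at most MAXNS = 3 entries).
# Constant and function names here are hypothetical.
MAX_NAMESERVERS = 3

def apply_nameserver_limit(nameservers: list[str]) -> list[str]:
    """Drop everything past the third nameserver, emitting a warning
    shaped like the dns.go:153 entries in this log."""
    if len(nameservers) > MAX_NAMESERVERS:
        kept = nameservers[:MAX_NAMESERVERS]
        print("Nameserver limits were exceeded, some nameservers have been "
              f"omitted, the applied nameserver line is: {' '.join(kept)}")
        return kept
    return nameservers

# Reproduces the applied line seen above when a fourth server is present:
apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
```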
Nov 12 20:45:30.478804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff-rootfs.mount: Deactivated successfully. Nov 12 20:45:30.520951 kubelet[2665]: I1112 20:45:30.520915 2665 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:45:31.009781 kubelet[2665]: I1112 20:45:31.009514 2665 topology_manager.go:215] "Topology Admit Handler" podUID="e89180de-1bfe-48ec-9535-6f9b7004bbe3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x8dc6" Nov 12 20:45:31.013967 kubelet[2665]: I1112 20:45:31.013869 2665 topology_manager.go:215] "Topology Admit Handler" podUID="5baef4de-6c0a-45e7-ba9b-68f67c3817e2" podNamespace="calico-apiserver" podName="calico-apiserver-868b9f8d6-nv4z7" Nov 12 20:45:31.014919 kubelet[2665]: I1112 20:45:31.014886 2665 topology_manager.go:215] "Topology Admit Handler" podUID="40b8e542-9add-41b9-aa96-e7a054affecb" podNamespace="calico-apiserver" podName="calico-apiserver-868b9f8d6-cpnj4" Nov 12 20:45:31.017822 kubelet[2665]: I1112 20:45:31.017728 2665 topology_manager.go:215] "Topology Admit Handler" podUID="ae276fed-5dad-46fe-8b4a-5f71fa73249a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s7cws" Nov 12 20:45:31.018685 kubelet[2665]: I1112 20:45:31.018653 2665 topology_manager.go:215] "Topology Admit Handler" podUID="f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50" podNamespace="calico-system" podName="calico-kube-controllers-7c846d996f-fnghq" Nov 12 20:45:31.023491 systemd[1]: Created slice kubepods-burstable-pode89180de_1bfe_48ec_9535_6f9b7004bbe3.slice - libcontainer container kubepods-burstable-pode89180de_1bfe_48ec_9535_6f9b7004bbe3.slice. Nov 12 20:45:31.032644 systemd[1]: Created slice kubepods-besteffort-podf7c966cf_6ba7_4ec6_b4dd_6239e2ce2d50.slice - libcontainer container kubepods-besteffort-podf7c966cf_6ba7_4ec6_b4dd_6239e2ce2d50.slice. Nov 12 20:45:31.040060 systemd[1]: Created slice kubepods-besteffort-pod40b8e542_9add_41b9_aa96_e7a054affecb.slice - libcontainer container kubepods-besteffort-pod40b8e542_9add_41b9_aa96_e7a054affecb.slice. Nov 12 20:45:31.045971 systemd[1]: Created slice kubepods-burstable-podae276fed_5dad_46fe_8b4a_5f71fa73249a.slice - libcontainer container kubepods-burstable-podae276fed_5dad_46fe_8b4a_5f71fa73249a.slice. Nov 12 20:45:31.059651 systemd[1]: Created slice kubepods-besteffort-pod5baef4de_6c0a_45e7_ba9b_68f67c3817e2.slice - libcontainer container kubepods-besteffort-pod5baef4de_6c0a_45e7_ba9b_68f67c3817e2.slice. 
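The five "Created slice" entries show kubelet's systemd cgroup driver materializing one slice per newly admitted pod, nested by QoS class (burstable, besteffort) under kubepods.slice. systemd reserves "-" as its slice-nesting separator, so the driver escapes the dashes inside each pod UID to underscores, which is how UID e89180de-1bfe-48ec-9535-6f9b7004bbe3 becomes kubepods-burstable-pode89180de_1bfe_48ec_9535_6f9b7004bbe3.slice. A sketch of the mapping, with a hypothetical helper name (Guaranteed-class pods, which omit the QoS segment, are not modeled):

```python
# Hypothetical helper reproducing the slice names in the log; not the
# actual cgroup-driver code. Covers the Burstable/BestEffort QoS classes.
def pod_slice_name(qos: str, pod_uid: str) -> str:
    # systemd interprets "-" as slice nesting, so pod-UID dashes are
    # escaped to underscores before being embedded in the unit name.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

assert pod_slice_name("burstable", "e89180de-1bfe-48ec-9535-6f9b7004bbe3") == \
    "kubepods-burstable-pode89180de_1bfe_48ec_9535_6f9b7004bbe3.slice"
assert pod_slice_name("besteffort", "5baef4de-6c0a-45e7-ba9b-68f67c3817e2") == \
    "kubepods-besteffort-pod5baef4de_6c0a_45e7_ba9b_68f67c3817e2.slice"
```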
Nov 12 20:45:31.064746 kubelet[2665]: I1112 20:45:31.064701 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e89180de-1bfe-48ec-9535-6f9b7004bbe3-config-volume\") pod \"coredns-7db6d8ff4d-x8dc6\" (UID: \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\") " pod="kube-system/coredns-7db6d8ff4d-x8dc6" Nov 12 20:45:31.064746 kubelet[2665]: I1112 20:45:31.064749 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxngn\" (UniqueName: \"kubernetes.io/projected/f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50-kube-api-access-zxngn\") pod \"calico-kube-controllers-7c846d996f-fnghq\" (UID: \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\") " pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" Nov 12 20:45:31.064984 kubelet[2665]: I1112 20:45:31.064780 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5baef4de-6c0a-45e7-ba9b-68f67c3817e2-calico-apiserver-certs\") pod \"calico-apiserver-868b9f8d6-nv4z7\" (UID: \"5baef4de-6c0a-45e7-ba9b-68f67c3817e2\") " pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" Nov 12 20:45:31.064984 kubelet[2665]: I1112 20:45:31.064804 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50-tigera-ca-bundle\") pod \"calico-kube-controllers-7c846d996f-fnghq\" (UID: \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\") " pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" Nov 12 20:45:31.064984 kubelet[2665]: I1112 20:45:31.064827 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae276fed-5dad-46fe-8b4a-5f71fa73249a-config-volume\") pod \"coredns-7db6d8ff4d-s7cws\" (UID: \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\") " pod="kube-system/coredns-7db6d8ff4d-s7cws" Nov 12 20:45:31.064984 kubelet[2665]: I1112 20:45:31.064848 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfg8s\" (UniqueName: \"kubernetes.io/projected/e89180de-1bfe-48ec-9535-6f9b7004bbe3-kube-api-access-cfg8s\") pod \"coredns-7db6d8ff4d-x8dc6\" (UID: \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\") " pod="kube-system/coredns-7db6d8ff4d-x8dc6" Nov 12 20:45:31.064984 kubelet[2665]: I1112 20:45:31.064870 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdf2p\" (UniqueName: \"kubernetes.io/projected/ae276fed-5dad-46fe-8b4a-5f71fa73249a-kube-api-access-zdf2p\") pod \"coredns-7db6d8ff4d-s7cws\" (UID: \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\") " pod="kube-system/coredns-7db6d8ff4d-s7cws" Nov 12 20:45:31.065152 kubelet[2665]: I1112 20:45:31.064929 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpkr\" (UniqueName: \"kubernetes.io/projected/5baef4de-6c0a-45e7-ba9b-68f67c3817e2-kube-api-access-6xpkr\") pod \"calico-apiserver-868b9f8d6-nv4z7\" (UID: \"5baef4de-6c0a-45e7-ba9b-68f67c3817e2\") " pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" Nov 12 20:45:31.065152 kubelet[2665]: I1112 20:45:31.064952 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bl6bx\" (UniqueName: \"kubernetes.io/projected/40b8e542-9add-41b9-aa96-e7a054affecb-kube-api-access-bl6bx\") pod \"calico-apiserver-868b9f8d6-cpnj4\" (UID: \"40b8e542-9add-41b9-aa96-e7a054affecb\") " pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" Nov 12 20:45:31.065152 kubelet[2665]: I1112 20:45:31.064990 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40b8e542-9add-41b9-aa96-e7a054affecb-calico-apiserver-certs\") pod \"calico-apiserver-868b9f8d6-cpnj4\" (UID: \"40b8e542-9add-41b9-aa96-e7a054affecb\") " pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" Nov 12 20:45:31.276605 containerd[1478]: time="2024-11-12T20:45:31.276447005Z" level=info msg="shim disconnected" id=04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff namespace=k8s.io Nov 12 20:45:31.276605 containerd[1478]: time="2024-11-12T20:45:31.276501436Z" level=warning msg="cleaning up after shim disconnected" id=04c8d2921b1eb6c293705b48f57a1e0d8f2178a8af06da688cedf84ca4c817ff namespace=k8s.io Nov 12 20:45:31.276605 containerd[1478]: time="2024-11-12T20:45:31.276510333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:45:31.328175 kubelet[2665]: E1112 20:45:31.328097 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:31.329126 containerd[1478]: time="2024-11-12T20:45:31.329078469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x8dc6,Uid:e89180de-1bfe-48ec-9535-6f9b7004bbe3,Namespace:kube-system,Attempt:0,}" Nov 12 20:45:31.340788 containerd[1478]: time="2024-11-12T20:45:31.340736200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c846d996f-fnghq,Uid:f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:31.343768 containerd[1478]: time="2024-11-12T20:45:31.343712195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-cpnj4,Uid:40b8e542-9add-41b9-aa96-e7a054affecb,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:45:31.357728 kubelet[2665]: E1112 20:45:31.357665 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:31.358331 containerd[1478]: time="2024-11-12T20:45:31.358288614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7cws,Uid:ae276fed-5dad-46fe-8b4a-5f71fa73249a,Namespace:kube-system,Attempt:0,}" Nov 12 20:45:31.363669 containerd[1478]: time="2024-11-12T20:45:31.363568602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-nv4z7,Uid:5baef4de-6c0a-45e7-ba9b-68f67c3817e2,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:45:31.447482 containerd[1478]: time="2024-11-12T20:45:31.447413663Z" level=error msg="Failed to destroy network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.448214 containerd[1478]: time="2024-11-12T20:45:31.448027607Z" level=error msg="Failed to destroy network for sandbox 
\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.448265 containerd[1478]: time="2024-11-12T20:45:31.448234242Z" level=error msg="encountered an error cleaning up failed sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.448341 containerd[1478]: time="2024-11-12T20:45:31.448309462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x8dc6,Uid:e89180de-1bfe-48ec-9535-6f9b7004bbe3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.448739 kubelet[2665]: E1112 20:45:31.448677 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.448826 kubelet[2665]: E1112 20:45:31.448782 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x8dc6" Nov 12 20:45:31.448826 kubelet[2665]: E1112 20:45:31.448811 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x8dc6" Nov 12 20:45:31.450413 containerd[1478]: time="2024-11-12T20:45:31.450373458Z" level=error msg="encountered an error cleaning up failed sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.450512 containerd[1478]: time="2024-11-12T20:45:31.450446815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c846d996f-fnghq,Uid:f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.450596 kubelet[2665]: E1112 20:45:31.448888 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x8dc6_kube-system(e89180de-1bfe-48ec-9535-6f9b7004bbe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x8dc6_kube-system(e89180de-1bfe-48ec-9535-6f9b7004bbe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x8dc6" podUID="e89180de-1bfe-48ec-9535-6f9b7004bbe3" Nov 12 20:45:31.452142 kubelet[2665]: E1112 20:45:31.452077 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.452215 kubelet[2665]: E1112 20:45:31.452165 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" Nov 12 20:45:31.452251 kubelet[2665]: E1112 20:45:31.452209 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" Nov 12 20:45:31.454121 kubelet[2665]: E1112 20:45:31.452439 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c846d996f-fnghq_calico-system(f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c846d996f-fnghq_calico-system(f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" podUID="f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50" Nov 12 20:45:31.457502 kubelet[2665]: E1112 20:45:31.457454 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:31.458417 containerd[1478]: 
time="2024-11-12T20:45:31.458384044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:45:31.459987 kubelet[2665]: I1112 20:45:31.459961 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:31.461537 containerd[1478]: time="2024-11-12T20:45:31.460929096Z" level=info msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\"" Nov 12 20:45:31.461537 containerd[1478]: time="2024-11-12T20:45:31.461132284Z" level=info msg="Ensure that sandbox ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846 in task-service has been cleanup successfully" Nov 12 20:45:31.465779 kubelet[2665]: I1112 20:45:31.465735 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:31.467523 containerd[1478]: time="2024-11-12T20:45:31.467171547Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\"" Nov 12 20:45:31.469579 containerd[1478]: time="2024-11-12T20:45:31.467421432Z" level=info msg="Ensure that sandbox 1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a in task-service has been cleanup successfully" Nov 12 20:45:31.523014 containerd[1478]: time="2024-11-12T20:45:31.521302995Z" level=error msg="Failed to destroy network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.523014 containerd[1478]: time="2024-11-12T20:45:31.522785628Z" level=error msg="encountered an error cleaning up failed sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.523014 containerd[1478]: time="2024-11-12T20:45:31.522845890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-cpnj4,Uid:40b8e542-9add-41b9-aa96-e7a054affecb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.525027 kubelet[2665]: E1112 20:45:31.524950 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.525126 kubelet[2665]: E1112 20:45:31.525061 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" Nov 12 20:45:31.525126 kubelet[2665]: E1112 20:45:31.525093 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" Nov 12 20:45:31.525208 kubelet[2665]: E1112 20:45:31.525151 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868b9f8d6-cpnj4_calico-apiserver(40b8e542-9add-41b9-aa96-e7a054affecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868b9f8d6-cpnj4_calico-apiserver(40b8e542-9add-41b9-aa96-e7a054affecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" podUID="40b8e542-9add-41b9-aa96-e7a054affecb" Nov 12 20:45:31.525729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058-shm.mount: Deactivated successfully. Nov 12 20:45:31.540727 containerd[1478]: time="2024-11-12T20:45:31.540556579Z" level=error msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" failed" error="failed to destroy network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.541990 kubelet[2665]: E1112 20:45:31.541919 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:31.542092 kubelet[2665]: E1112 20:45:31.542008 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"} Nov 12 20:45:31.542129 kubelet[2665]: E1112 20:45:31.542091 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:31.542228 kubelet[2665]: E1112 
20:45:31.542125 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" podUID="f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50" Nov 12 20:45:31.543823 containerd[1478]: time="2024-11-12T20:45:31.543780226Z" level=error msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\" failed" error="failed to destroy network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.544276 kubelet[2665]: E1112 20:45:31.544009 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:31.544276 kubelet[2665]: E1112 20:45:31.544047 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846"} Nov 12 20:45:31.544276 kubelet[2665]: E1112 20:45:31.544077 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:31.544276 kubelet[2665]: E1112 20:45:31.544104 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x8dc6" podUID="e89180de-1bfe-48ec-9535-6f9b7004bbe3" Nov 12 20:45:31.545186 containerd[1478]: time="2024-11-12T20:45:31.545141602Z" level=error msg="Failed to destroy network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.547991 containerd[1478]: time="2024-11-12T20:45:31.547932081Z" 
level=error msg="encountered an error cleaning up failed sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.547991 containerd[1478]: time="2024-11-12T20:45:31.547985330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7cws,Uid:ae276fed-5dad-46fe-8b4a-5f71fa73249a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.548102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2-shm.mount: Deactivated successfully. Nov 12 20:45:31.548253 kubelet[2665]: E1112 20:45:31.548198 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.548303 kubelet[2665]: E1112 20:45:31.548257 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s7cws" Nov 12 20:45:31.548303 kubelet[2665]: E1112 20:45:31.548277 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s7cws" Nov 12 20:45:31.549284 kubelet[2665]: E1112 20:45:31.548557 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s7cws_kube-system(ae276fed-5dad-46fe-8b4a-5f71fa73249a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s7cws_kube-system(ae276fed-5dad-46fe-8b4a-5f71fa73249a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s7cws" podUID="ae276fed-5dad-46fe-8b4a-5f71fa73249a" Nov 12 20:45:31.549638 containerd[1478]: time="2024-11-12T20:45:31.549575594Z" level=error msg="Failed to destroy network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.550111 containerd[1478]: time="2024-11-12T20:45:31.550057191Z" level=error msg="encountered an error cleaning up failed sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.550161 containerd[1478]: time="2024-11-12T20:45:31.550118064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-nv4z7,Uid:5baef4de-6c0a-45e7-ba9b-68f67c3817e2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.550369 kubelet[2665]: E1112 20:45:31.550288 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:31.550369 kubelet[2665]: E1112 20:45:31.550341 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" Nov 12 20:45:31.550369 kubelet[2665]: E1112 20:45:31.550363 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" Nov 12 20:45:31.550503 kubelet[2665]: E1112 20:45:31.550412 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868b9f8d6-nv4z7_calico-apiserver(5baef4de-6c0a-45e7-ba9b-68f67c3817e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868b9f8d6-nv4z7_calico-apiserver(5baef4de-6c0a-45e7-ba9b-68f67c3817e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" podUID="5baef4de-6c0a-45e7-ba9b-68f67c3817e2" Nov 12 20:45:31.553269 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331-shm.mount: Deactivated successfully. Nov 12 20:45:32.339949 systemd[1]: Created slice kubepods-besteffort-podfb4c4a07_9a98_43af_84e7_91573664a62a.slice - libcontainer container kubepods-besteffort-podfb4c4a07_9a98_43af_84e7_91573664a62a.slice. Nov 12 20:45:32.343099 containerd[1478]: time="2024-11-12T20:45:32.343060768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qwkzp,Uid:fb4c4a07-9a98-43af-84e7-91573664a62a,Namespace:calico-system,Attempt:0,}" Nov 12 20:45:32.469473 kubelet[2665]: I1112 20:45:32.469410 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:32.470042 containerd[1478]: time="2024-11-12T20:45:32.469960912Z" level=info msg="StopPodSandbox for \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\"" Nov 12 20:45:32.470208 containerd[1478]: time="2024-11-12T20:45:32.470181885Z" level=info msg="Ensure that sandbox 02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331 in task-service has been cleanup successfully" Nov 12 20:45:32.471154 kubelet[2665]: I1112 20:45:32.471108 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:32.471817 containerd[1478]: time="2024-11-12T20:45:32.471778925Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:45:32.472162 containerd[1478]: time="2024-11-12T20:45:32.471924316Z" level=info msg="Ensure that sandbox 5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2 in task-service has been cleanup successfully" Nov 12 20:45:32.472439 kubelet[2665]: I1112 20:45:32.472389 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:32.472819 containerd[1478]: time="2024-11-12T20:45:32.472784662Z" level=info msg="StopPodSandbox for \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\"" Nov 12 20:45:32.473010 containerd[1478]: time="2024-11-12T20:45:32.472989404Z" level=info msg="Ensure that sandbox 48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058 in task-service has been cleanup successfully" Nov 12 20:45:32.507474 containerd[1478]: time="2024-11-12T20:45:32.506941697Z" level=error msg="StopPodSandbox for \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\" failed" error="failed to destroy network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.507646 kubelet[2665]: E1112 20:45:32.507274 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:32.507646 kubelet[2665]: E1112 
20:45:32.507345 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058"} Nov 12 20:45:32.507646 kubelet[2665]: E1112 20:45:32.507393 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40b8e542-9add-41b9-aa96-e7a054affecb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:32.507646 kubelet[2665]: E1112 20:45:32.507427 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40b8e542-9add-41b9-aa96-e7a054affecb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" podUID="40b8e542-9add-41b9-aa96-e7a054affecb" Nov 12 20:45:32.508631 containerd[1478]: time="2024-11-12T20:45:32.508540060Z" level=error msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" failed" error="failed to destroy network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.508845 kubelet[2665]: E1112 20:45:32.508806 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:32.508845 kubelet[2665]: E1112 20:45:32.508842 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2"} Nov 12 20:45:32.508939 kubelet[2665]: E1112 20:45:32.508869 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:32.508939 kubelet[2665]: E1112 20:45:32.508895 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s7cws" podUID="ae276fed-5dad-46fe-8b4a-5f71fa73249a" Nov 12 20:45:32.517848 containerd[1478]: time="2024-11-12T20:45:32.517796945Z" level=error msg="StopPodSandbox for \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\" failed" error="failed to destroy network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.518301 kubelet[2665]: E1112 20:45:32.518252 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:32.518404 kubelet[2665]: E1112 20:45:32.518323 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331"} Nov 12 20:45:32.518404 kubelet[2665]: E1112 20:45:32.518366 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5baef4de-6c0a-45e7-ba9b-68f67c3817e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:32.518521 kubelet[2665]: E1112 20:45:32.518411 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5baef4de-6c0a-45e7-ba9b-68f67c3817e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" podUID="5baef4de-6c0a-45e7-ba9b-68f67c3817e2" Nov 12 20:45:32.933887 containerd[1478]: time="2024-11-12T20:45:32.933808899Z" level=error msg="Failed to destroy network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.936907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5-shm.mount: Deactivated successfully. 
Nov 12 20:45:32.937478 containerd[1478]: time="2024-11-12T20:45:32.937402926Z" level=error msg="encountered an error cleaning up failed sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.937690 containerd[1478]: time="2024-11-12T20:45:32.937601005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qwkzp,Uid:fb4c4a07-9a98-43af-84e7-91573664a62a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.938752 kubelet[2665]: E1112 20:45:32.938529 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:32.938752 kubelet[2665]: E1112 20:45:32.938584 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:32.938752 kubelet[2665]: E1112 20:45:32.938608 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qwkzp" Nov 12 20:45:32.939283 kubelet[2665]: E1112 20:45:32.938671 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qwkzp_calico-system(fb4c4a07-9a98-43af-84e7-91573664a62a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qwkzp_calico-system(fb4c4a07-9a98-43af-84e7-91573664a62a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:33.474130 kubelet[2665]: I1112 20:45:33.474086 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:33.474751 containerd[1478]: time="2024-11-12T20:45:33.474703667Z" level=info msg="StopPodSandbox for 
\"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\"" Nov 12 20:45:33.475212 containerd[1478]: time="2024-11-12T20:45:33.474894204Z" level=info msg="Ensure that sandbox 045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5 in task-service has been cleanup successfully" Nov 12 20:45:33.501976 containerd[1478]: time="2024-11-12T20:45:33.501910862Z" level=error msg="StopPodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\" failed" error="failed to destroy network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:33.502245 kubelet[2665]: E1112 20:45:33.502194 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:33.502333 kubelet[2665]: E1112 20:45:33.502255 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5"} Nov 12 20:45:33.502333 kubelet[2665]: E1112 20:45:33.502291 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb4c4a07-9a98-43af-84e7-91573664a62a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:33.502333 kubelet[2665]: E1112 20:45:33.502316 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb4c4a07-9a98-43af-84e7-91573664a62a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qwkzp" podUID="fb4c4a07-9a98-43af-84e7-91573664a62a" Nov 12 20:45:34.509387 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:60288.service - OpenSSH per-connection server daemon (10.0.0.1:60288). Nov 12 20:45:34.790363 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 60288 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:34.792195 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:34.796915 systemd-logind[1451]: New session 13 of user core. Nov 12 20:45:34.807934 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:45:35.038712 sshd[3862]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:35.042876 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:60288.service: Deactivated successfully. 
Nov 12 20:45:35.044929 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:45:35.045666 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:45:35.046604 systemd-logind[1451]: Removed session 13. Nov 12 20:45:37.918839 kubelet[2665]: I1112 20:45:37.918778 2665 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:45:37.919661 kubelet[2665]: E1112 20:45:37.919619 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:38.667401 kubelet[2665]: E1112 20:45:38.667370 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:40.056407 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578). Nov 12 20:45:40.095216 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:40.097478 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:40.103013 systemd-logind[1451]: New session 14 of user core. Nov 12 20:45:40.108780 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:45:40.198778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099880240.mount: Deactivated successfully. Nov 12 20:45:42.332119 containerd[1478]: time="2024-11-12T20:45:42.332032323Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\"" Nov 12 20:45:42.615022 sshd[3884]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:42.615589 containerd[1478]: time="2024-11-12T20:45:42.615114310Z" level=error msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" failed" error="failed to destroy network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:42.615857 kubelet[2665]: E1112 20:45:42.615417 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:42.615857 kubelet[2665]: E1112 20:45:42.615485 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"} Nov 12 20:45:42.615857 kubelet[2665]: E1112 20:45:42.615536 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:42.615857 kubelet[2665]: E1112 20:45:42.615567 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" podUID="f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50" Nov 12 20:45:42.620305 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:50578.service: Deactivated successfully. Nov 12 20:45:42.623315 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:45:42.624373 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:45:42.625707 systemd-logind[1451]: Removed session 14. Nov 12 20:45:42.705540 containerd[1478]: time="2024-11-12T20:45:42.705449829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:42.715396 containerd[1478]: time="2024-11-12T20:45:42.715302467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:45:42.752741 containerd[1478]: time="2024-11-12T20:45:42.752681923Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:43.082376 containerd[1478]: time="2024-11-12T20:45:43.082315581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:43.083496 containerd[1478]: time="2024-11-12T20:45:43.083137879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 11.624557889s" Nov 12 20:45:43.083496 containerd[1478]: time="2024-11-12T20:45:43.083185629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:45:43.098821 containerd[1478]: time="2024-11-12T20:45:43.098784008Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:45:43.338349 containerd[1478]: time="2024-11-12T20:45:43.336528520Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:45:43.367014 containerd[1478]: time="2024-11-12T20:45:43.366949860Z" level=error msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" failed" error="failed to destroy network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:43.367236 kubelet[2665]: E1112 20:45:43.367172 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:43.367297 kubelet[2665]: E1112 20:45:43.367244 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2"} Nov 12 20:45:43.367297 kubelet[2665]: E1112 20:45:43.367280 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:43.367412 kubelet[2665]: E1112 20:45:43.367304 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae276fed-5dad-46fe-8b4a-5f71fa73249a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s7cws" podUID="ae276fed-5dad-46fe-8b4a-5f71fa73249a" Nov 12 20:45:43.707246 containerd[1478]: time="2024-11-12T20:45:43.707054083Z" level=info msg="CreateContainer within sandbox \"86deb587cb850e9bca20563410b994010644126da06e4397c1aed753ba961ba3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ecd6a0945ba999ec285ea59af0045adadaeb02bc8b81f3159f0c8de4c6193343\"" Nov 12 20:45:43.707874 containerd[1478]: time="2024-11-12T20:45:43.707832578Z" level=info msg="StartContainer for \"ecd6a0945ba999ec285ea59af0045adadaeb02bc8b81f3159f0c8de4c6193343\"" Nov 12 20:45:43.792867 systemd[1]: Started cri-containerd-ecd6a0945ba999ec285ea59af0045adadaeb02bc8b81f3159f0c8de4c6193343.scope - libcontainer container ecd6a0945ba999ec285ea59af0045adadaeb02bc8b81f3159f0c8de4c6193343. Nov 12 20:45:43.883571 containerd[1478]: time="2024-11-12T20:45:43.883490574Z" level=info msg="StartContainer for \"ecd6a0945ba999ec285ea59af0045adadaeb02bc8b81f3159f0c8de4c6193343\" returns successfully" Nov 12 20:45:43.960101 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:45:43.961081 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
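[editor's note] The kubelet's "Nameserver limits exceeded" warnings (20:45:37.919 and 20:45:38.667 above, and again at 20:45:44-45 below) reflect the glibc resolver honouring at most three nameserver lines; with 1.1.1.1, 1.0.0.1, and 8.8.8.8 applied, any further entries in the pod's resolv.conf are dropped. A sketch of that truncation rule, assuming the limit of 3 (glibc's MAXNS); the function name and the sample fourth server are mine:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS: resolv.conf entries past the
// third "nameserver" line are ignored by the resolver, which is what
// the kubelet warns about above.
const maxNameservers = 3

// splitNameservers returns the nameservers that will be applied and
// the ones that will be dropped, given resolv.conf contents.
func splitNameservers(resolvConf string) (applied, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return applied, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, dropped := splitNameservers(conf)
	fmt.Println("applied:", applied) // [1.1.1.1 1.0.0.1 8.8.8.8], as in the warning
	fmt.Println("dropped:", dropped) // [9.9.9.9]
}
```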
Nov 12 20:45:44.331297 containerd[1478]: time="2024-11-12T20:45:44.331237984Z" level=info msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\"" Nov 12 20:45:44.364687 containerd[1478]: time="2024-11-12T20:45:44.364592918Z" level=error msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\" failed" error="failed to destroy network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:45:44.365105 kubelet[2665]: E1112 20:45:44.364893 2665 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:44.365105 kubelet[2665]: E1112 20:45:44.364955 2665 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846"} Nov 12 20:45:44.365105 kubelet[2665]: E1112 20:45:44.364996 2665 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:45:44.365105 kubelet[2665]: E1112 20:45:44.365027 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e89180de-1bfe-48ec-9535-6f9b7004bbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x8dc6" podUID="e89180de-1bfe-48ec-9535-6f9b7004bbe3" Nov 12 20:45:44.680726 kubelet[2665]: E1112 20:45:44.680301 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:44.800130 kubelet[2665]: I1112 20:45:44.800014 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nlwcd" podStartSLOduration=2.240363729 podStartE2EDuration="34.799993079s" podCreationTimestamp="2024-11-12 20:45:10 +0000 UTC" firstStartedPulling="2024-11-12 20:45:10.52470739 +0000 UTC m=+27.281458636" lastFinishedPulling="2024-11-12 20:45:43.08433674 +0000 UTC m=+59.841087986" observedRunningTime="2024-11-12 20:45:44.798527991 +0000 UTC m=+61.555279237" watchObservedRunningTime="2024-11-12 20:45:44.799993079 +0000 UTC m=+61.556744345" Nov 12 20:45:45.332245 containerd[1478]: 
time="2024-11-12T20:45:45.331612971Z" level=info msg="StopPodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\"" Nov 12 20:45:45.332245 containerd[1478]: time="2024-11-12T20:45:45.331838108Z" level=info msg="StopPodSandbox for \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\"" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.485 [INFO][4099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.486 [INFO][4099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" iface="eth0" netns="/var/run/netns/cni-c41d1b42-1d30-bc3a-9cf7-32be9be25b61" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.486 [INFO][4099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" iface="eth0" netns="/var/run/netns/cni-c41d1b42-1d30-bc3a-9cf7-32be9be25b61" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.488 [INFO][4099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" iface="eth0" netns="/var/run/netns/cni-c41d1b42-1d30-bc3a-9cf7-32be9be25b61" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.488 [INFO][4099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.488 [INFO][4099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.540 [INFO][4109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" HandleID="k8s-pod-network.48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.540 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.540 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.547 [WARNING][4109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" HandleID="k8s-pod-network.48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.547 [INFO][4109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" HandleID="k8s-pod-network.48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.550 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:45.555062 containerd[1478]: 2024-11-12 20:45:45.553 [INFO][4099] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058" Nov 12 20:45:45.556502 containerd[1478]: time="2024-11-12T20:45:45.555274659Z" level=info msg="TearDown network for sandbox \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\" successfully" Nov 12 20:45:45.556502 containerd[1478]: time="2024-11-12T20:45:45.555307261Z" level=info msg="StopPodSandbox for \"48b5d40660342f5f5b587c91302760111e46e28d3268525e211fc11c5d48f058\" returns successfully" Nov 12 20:45:45.556502 containerd[1478]: time="2024-11-12T20:45:45.556192300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-cpnj4,Uid:40b8e542-9add-41b9-aa96-e7a054affecb,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:45:45.559290 systemd[1]: run-netns-cni\x2dc41d1b42\x2d1d30\x2dbc3a\x2d9cf7\x2d32be9be25b61.mount: Deactivated successfully. Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.488 [INFO][4094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.488 [INFO][4094] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" iface="eth0" netns="/var/run/netns/cni-db0171ba-8d8e-db43-926c-56f13cdaa8e0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.489 [INFO][4094] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" iface="eth0" netns="/var/run/netns/cni-db0171ba-8d8e-db43-926c-56f13cdaa8e0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.489 [INFO][4094] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" iface="eth0" netns="/var/run/netns/cni-db0171ba-8d8e-db43-926c-56f13cdaa8e0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.489 [INFO][4094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.489 [INFO][4094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.540 [INFO][4110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.540 [INFO][4110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.550 [INFO][4110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.555 [WARNING][4110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.556 [INFO][4110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.557 [INFO][4110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:45.562912 containerd[1478]: 2024-11-12 20:45:45.560 [INFO][4094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Nov 12 20:45:45.563269 containerd[1478]: time="2024-11-12T20:45:45.563150877Z" level=info msg="TearDown network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\" successfully" Nov 12 20:45:45.563269 containerd[1478]: time="2024-11-12T20:45:45.563182286Z" level=info msg="StopPodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\" returns successfully" Nov 12 20:45:45.564196 containerd[1478]: time="2024-11-12T20:45:45.564168728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qwkzp,Uid:fb4c4a07-9a98-43af-84e7-91573664a62a,Namespace:calico-system,Attempt:1,}" Nov 12 20:45:45.566124 systemd[1]: run-netns-cni\x2ddb0171ba\x2d8d8e\x2ddb43\x2d926c\x2d56f13cdaa8e0.mount: Deactivated successfully. Nov 12 20:45:45.682064 kubelet[2665]: E1112 20:45:45.681934 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:45.718280 systemd-networkd[1404]: calia9b6ffa96df: Link UP Nov 12 20:45:45.718988 systemd-networkd[1404]: calia9b6ffa96df: Gained carrier Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.626 [INFO][4124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.636 [INFO][4124] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qwkzp-eth0 csi-node-driver- calico-system fb4c4a07-9a98-43af-84e7-91573664a62a 948 0 2024-11-12 20:45:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qwkzp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9b6ffa96df [] []}} ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.636 [INFO][4124] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 
20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.666 [INFO][4151] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" HandleID="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.674 [INFO][4151] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" HandleID="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f0d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qwkzp", "timestamp":"2024-11-12 20:45:45.666571325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.674 [INFO][4151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.674 [INFO][4151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.674 [INFO][4151] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.676 [INFO][4151] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.684 [INFO][4151] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.689 [INFO][4151] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.691 [INFO][4151] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.692 [INFO][4151] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.692 [INFO][4151] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.694 [INFO][4151] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.700 [INFO][4151] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.705 [INFO][4151] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 
20:45:45.705 [INFO][4151] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" host="localhost" Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.705 [INFO][4151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:45.745215 containerd[1478]: 2024-11-12 20:45:45.705 [INFO][4151] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" HandleID="k8s-pod-network.76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.746465 containerd[1478]: 2024-11-12 20:45:45.708 [INFO][4124] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qwkzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb4c4a07-9a98-43af-84e7-91573664a62a", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qwkzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b6ffa96df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:45.746465 containerd[1478]: 2024-11-12 20:45:45.708 [INFO][4124] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.746465 containerd[1478]: 2024-11-12 20:45:45.708 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b6ffa96df ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.746465 containerd[1478]: 2024-11-12 20:45:45.719 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.746465 
containerd[1478]: 2024-11-12 20:45:45.720 [INFO][4124] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qwkzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb4c4a07-9a98-43af-84e7-91573664a62a", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec", Pod:"csi-node-driver-qwkzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b6ffa96df", MAC:"ba:41:a0:b6:d3:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:45.746465 containerd[1478]: 2024-11-12 20:45:45.742 [INFO][4124] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec" Namespace="calico-system" Pod="csi-node-driver-qwkzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qwkzp-eth0" Nov 12 20:45:45.940306 systemd-networkd[1404]: cali3941998a779: Link UP Nov 12 20:45:45.941431 systemd-networkd[1404]: cali3941998a779: Gained carrier Nov 12 20:45:46.005125 containerd[1478]: time="2024-11-12T20:45:46.005007727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:46.005125 containerd[1478]: time="2024-11-12T20:45:46.005100854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:46.005125 containerd[1478]: time="2024-11-12T20:45:46.005121693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:46.005458 containerd[1478]: time="2024-11-12T20:45:46.005298088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:46.029802 systemd[1]: Started cri-containerd-76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec.scope - libcontainer container 76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec. 
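[editor's note] The pod_startup_latency_tracker entry at 20:45:44.800 above reports podStartE2EDuration="34.799993079s" and podStartSLOduration=2.240363729 for calico-node-nlwcd. The logged numbers are consistent with E2E being watchObservedRunningTime minus podCreationTimestamp, and the SLO duration being E2E minus the image-pull window (firstStartedPulling to lastFinishedPulling), matching the SLO convention of excluding pull time. A check of that arithmetic, with timestamps copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created := mustParse("2024-11-12 20:45:10 +0000 UTC")
	firstPull := mustParse("2024-11-12 20:45:10.52470739 +0000 UTC")
	lastPull := mustParse("2024-11-12 20:45:43.08433674 +0000 UTC")
	observed := mustParse("2024-11-12 20:45:44.799993079 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println(e2e) // 34.799993079s, as logged
	fmt.Println(slo) // 2.240363729s, the logged podStartSLOduration
}
```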
Nov 12 20:45:46.042718 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:46.054377 containerd[1478]: time="2024-11-12T20:45:46.054339712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qwkzp,Uid:fb4c4a07-9a98-43af-84e7-91573664a62a,Namespace:calico-system,Attempt:1,} returns sandbox id \"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec\"" Nov 12 20:45:46.056184 containerd[1478]: time="2024-11-12T20:45:46.056168456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.633 [INFO][4135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.645 [INFO][4135] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0 calico-apiserver-868b9f8d6- calico-apiserver 40b8e542-9add-41b9-aa96-e7a054affecb 947 0 2024-11-12 20:45:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868b9f8d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-868b9f8d6-cpnj4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3941998a779 [] []}} ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.645 [INFO][4135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.676 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" HandleID="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.690 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" HandleID="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-868b9f8d6-cpnj4", "timestamp":"2024-11-12 20:45:45.676361835 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.690 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.705 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.705 [INFO][4157] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.709 [INFO][4157] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.715 [INFO][4157] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.723 [INFO][4157] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.725 [INFO][4157] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.741 [INFO][4157] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.741 [INFO][4157] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.748 [INFO][4157] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.796 [INFO][4157] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.934 [INFO][4157] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.934 [INFO][4157] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" host="localhost" Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.934 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:45:46.108210 containerd[1478]: 2024-11-12 20:45:45.934 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" HandleID="k8s-pod-network.96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Workload="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:45.937 [INFO][4135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0", GenerateName:"calico-apiserver-868b9f8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"40b8e542-9add-41b9-aa96-e7a054affecb", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868b9f8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-868b9f8d6-cpnj4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3941998a779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:45.937 [INFO][4135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:45.937 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3941998a779 ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:45.939 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:45.940 [INFO][4135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" 
Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0", GenerateName:"calico-apiserver-868b9f8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"40b8e542-9add-41b9-aa96-e7a054affecb", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868b9f8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f", Pod:"calico-apiserver-868b9f8d6-cpnj4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3941998a779", MAC:"d6:ce:0e:39:fd:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:46.108829 containerd[1478]: 2024-11-12 20:45:46.105 [INFO][4135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-cpnj4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--cpnj4-eth0" Nov 12 20:45:46.143323 containerd[1478]: time="2024-11-12T20:45:46.143172478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:46.143323 containerd[1478]: time="2024-11-12T20:45:46.143249354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:46.143323 containerd[1478]: time="2024-11-12T20:45:46.143265123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:46.143713 containerd[1478]: time="2024-11-12T20:45:46.143368390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:46.168904 systemd[1]: Started cri-containerd-96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f.scope - libcontainer container 96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f. 
Nov 12 20:45:46.183548 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:46.210503 containerd[1478]: time="2024-11-12T20:45:46.210360044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-cpnj4,Uid:40b8e542-9add-41b9-aa96-e7a054affecb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f\"" Nov 12 20:45:46.553655 kernel: bpftool[4425]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:45:46.789874 systemd-networkd[1404]: vxlan.calico: Link UP Nov 12 20:45:46.789887 systemd-networkd[1404]: vxlan.calico: Gained carrier Nov 12 20:45:47.237831 systemd-networkd[1404]: calia9b6ffa96df: Gained IPv6LL Nov 12 20:45:47.331885 containerd[1478]: time="2024-11-12T20:45:47.331801934Z" level=info msg="StopPodSandbox for \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\"" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" iface="eth0" netns="/var/run/netns/cni-f6332072-7015-fa3e-b4d0-ddf96dfa50bc" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" iface="eth0" netns="/var/run/netns/cni-f6332072-7015-fa3e-b4d0-ddf96dfa50bc" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" iface="eth0" netns="/var/run/netns/cni-f6332072-7015-fa3e-b4d0-ddf96dfa50bc" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.515 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.540 [INFO][4523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" HandleID="k8s-pod-network.02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.540 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.540 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.547 [WARNING][4523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" HandleID="k8s-pod-network.02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.547 [INFO][4523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" HandleID="k8s-pod-network.02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.548 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:47.555048 containerd[1478]: 2024-11-12 20:45:47.552 [INFO][4516] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331" Nov 12 20:45:47.555666 containerd[1478]: time="2024-11-12T20:45:47.555199098Z" level=info msg="TearDown network for sandbox \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\" successfully" Nov 12 20:45:47.555666 containerd[1478]: time="2024-11-12T20:45:47.555229616Z" level=info msg="StopPodSandbox for \"02c6c01d3b294e7a86ad9ede461b1d6030611a7aef05889b5d68c3e6584ee331\" returns successfully" Nov 12 20:45:47.556117 containerd[1478]: time="2024-11-12T20:45:47.556073471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-nv4z7,Uid:5baef4de-6c0a-45e7-ba9b-68f67c3817e2,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:45:47.558591 systemd[1]: run-netns-cni\x2df6332072\x2d7015\x2dfa3e\x2db4d0\x2dddf96dfa50bc.mount: Deactivated successfully. Nov 12 20:45:47.628423 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:56696.service - OpenSSH per-connection server daemon (10.0.0.1:56696). Nov 12 20:45:47.670206 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 56696 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:47.671870 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:47.677064 systemd-logind[1451]: New session 15 of user core. Nov 12 20:45:47.684822 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:45:47.749837 systemd-networkd[1404]: cali3941998a779: Gained IPv6LL Nov 12 20:45:47.843874 sshd[4532]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:47.855343 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:56696.service: Deactivated successfully. Nov 12 20:45:47.858119 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:45:47.860486 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:45:47.871148 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:56708.service - OpenSSH per-connection server daemon (10.0.0.1:56708). Nov 12 20:45:47.872313 systemd-logind[1451]: Removed session 15. Nov 12 20:45:47.907034 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 56708 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:47.909205 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:47.914958 systemd-logind[1451]: New session 16 of user core. 
Nov 12 20:45:47.917389 systemd-networkd[1404]: cali9e3b0009da1: Link UP Nov 12 20:45:47.918057 systemd-networkd[1404]: cali9e3b0009da1: Gained carrier Nov 12 20:45:47.919873 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.834 [INFO][4544] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0 calico-apiserver-868b9f8d6- calico-apiserver 5baef4de-6c0a-45e7-ba9b-68f67c3817e2 963 0 2024-11-12 20:45:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868b9f8d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-868b9f8d6-nv4z7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e3b0009da1 [] []}} ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.834 [INFO][4544] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.871 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" HandleID="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.881 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" HandleID="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-868b9f8d6-nv4z7", "timestamp":"2024-11-12 20:45:47.871012799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.881 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.882 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.882 [INFO][4560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.884 [INFO][4560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.888 [INFO][4560] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.894 [INFO][4560] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.896 [INFO][4560] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.898 [INFO][4560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.898 [INFO][4560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.899 [INFO][4560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.904 [INFO][4560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.911 [INFO][4560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.911 [INFO][4560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" host="localhost" Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.911 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:45:47.932128 containerd[1478]: 2024-11-12 20:45:47.911 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" HandleID="k8s-pod-network.26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Workload="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.915 [INFO][4544] cni-plugin/k8s.go 386: Populated endpoint ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0", GenerateName:"calico-apiserver-868b9f8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5baef4de-6c0a-45e7-ba9b-68f67c3817e2", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868b9f8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-868b9f8d6-nv4z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3b0009da1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.915 [INFO][4544] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.915 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e3b0009da1 ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.917 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.918 [INFO][4544] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" 
Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0", GenerateName:"calico-apiserver-868b9f8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5baef4de-6c0a-45e7-ba9b-68f67c3817e2", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868b9f8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf", Pod:"calico-apiserver-868b9f8d6-nv4z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3b0009da1", MAC:"ca:76:68:90:a6:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:47.933130 containerd[1478]: 2024-11-12 20:45:47.928 [INFO][4544] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf" Namespace="calico-apiserver" Pod="calico-apiserver-868b9f8d6-nv4z7" WorkloadEndpoint="localhost-k8s-calico--apiserver--868b9f8d6--nv4z7-eth0" Nov 12 20:45:47.959917 containerd[1478]: time="2024-11-12T20:45:47.959754377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:47.959917 containerd[1478]: time="2024-11-12T20:45:47.959830242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:47.960845 containerd[1478]: time="2024-11-12T20:45:47.960603813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:47.960845 containerd[1478]: time="2024-11-12T20:45:47.960763106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:47.994889 systemd[1]: Started cri-containerd-26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf.scope - libcontainer container 26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf. 
Nov 12 20:45:48.011463 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:48.046229 containerd[1478]: time="2024-11-12T20:45:48.046140719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868b9f8d6-nv4z7,Uid:5baef4de-6c0a-45e7-ba9b-68f67c3817e2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf\"" Nov 12 20:45:48.069912 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Nov 12 20:45:48.097242 sshd[4569]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:48.105739 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:56708.service: Deactivated successfully. Nov 12 20:45:48.108397 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:45:48.110725 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:45:48.123998 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:56722.service - OpenSSH per-connection server daemon (10.0.0.1:56722). Nov 12 20:45:48.127326 systemd-logind[1451]: Removed session 16. Nov 12 20:45:48.178691 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 56722 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:48.183066 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:48.190669 systemd-logind[1451]: New session 17 of user core. Nov 12 20:45:48.200881 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:45:48.345473 sshd[4634]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:48.349973 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:56722.service: Deactivated successfully. Nov 12 20:45:48.352456 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:45:48.353384 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:45:48.354415 systemd-logind[1451]: Removed session 17. 
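Interleaved with the sandbox startup, session 17 opens at 20:45:48.190669 and is closed again by 20:45:48.345473, roughly 155 ms end to end. A small sketch for pulling that kind of gap out of the journal prefixes; the layout string is an assumption matching the "Nov 12 20:45:48.190669" format, with the year appended since the prefix omits it:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// journal prefix format plus an explicit year (the prefix omits it)
    	const layout = "Jan 2 15:04:05.000000 2006"
    	opened, _ := time.Parse(layout, "Nov 12 20:45:48.190669 2024") // New session 17
    	closed, _ := time.Parse(layout, "Nov 12 20:45:48.345473 2024") // session closed
    	fmt.Println("session 17 lasted", closed.Sub(opened))          // ~154.8ms
    }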
Nov 12 20:45:48.357399 containerd[1478]: time="2024-11-12T20:45:48.357332083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:48.359222 containerd[1478]: time="2024-11-12T20:45:48.359155925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:45:48.360657 containerd[1478]: time="2024-11-12T20:45:48.360605905Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:48.363254 containerd[1478]: time="2024-11-12T20:45:48.363180174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:48.363969 containerd[1478]: time="2024-11-12T20:45:48.363856942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.307607992s" Nov 12 20:45:48.363969 containerd[1478]: time="2024-11-12T20:45:48.363913100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:45:48.365255 containerd[1478]: time="2024-11-12T20:45:48.365070072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:45:48.366610 containerd[1478]: time="2024-11-12T20:45:48.366568172Z" level=info msg="CreateContainer within sandbox \"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:45:48.395286 containerd[1478]: time="2024-11-12T20:45:48.395215760Z" level=info msg="CreateContainer within sandbox \"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f1074e43d73d268fcd9f6017e9dd0049f6a6cc1bf42e487b04f9b589e3cff75c\"" Nov 12 20:45:48.396106 containerd[1478]: time="2024-11-12T20:45:48.396082460Z" level=info msg="StartContainer for \"f1074e43d73d268fcd9f6017e9dd0049f6a6cc1bf42e487b04f9b589e3cff75c\"" Nov 12 20:45:48.433839 systemd[1]: Started cri-containerd-f1074e43d73d268fcd9f6017e9dd0049f6a6cc1bf42e487b04f9b589e3cff75c.scope - libcontainer container f1074e43d73d268fcd9f6017e9dd0049f6a6cc1bf42e487b04f9b589e3cff75c. 
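This block is the standard CRI sequence for one container: the csi image finishes pulling, CreateContainer registers f1074e43... inside the existing sandbox, StartContainer launches it, and systemd wraps the shim in a cri-containerd-*.scope unit. A rough equivalent using containerd's Go client directly, as a hedged sketch rather than what kubelet actually does (kubelet drives this over the CRI gRPC API; import paths assume the v1 Go client):

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed content lives in the k8s.io namespace
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// pull and unpack, then create and start a task from the image
    	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.0", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	container, err := client.NewContainer(ctx, "calico-csi-demo",
    		containerd.WithNewSnapshot("calico-csi-demo-snap", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer task.Delete(ctx)
    	if err := task.Start(ctx); err != nil { // analogous to the StartContainer above
    		log.Fatal(err)
    	}
    	log.Println("started task for", container.ID())
    }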
Nov 12 20:45:48.470950 containerd[1478]: time="2024-11-12T20:45:48.470873565Z" level=info msg="StartContainer for \"f1074e43d73d268fcd9f6017e9dd0049f6a6cc1bf42e487b04f9b589e3cff75c\" returns successfully" Nov 12 20:45:49.285853 systemd-networkd[1404]: cali9e3b0009da1: Gained IPv6LL Nov 12 20:45:52.987395 containerd[1478]: time="2024-11-12T20:45:52.973148873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:52.988182 containerd[1478]: time="2024-11-12T20:45:52.976886753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:45:52.989526 containerd[1478]: time="2024-11-12T20:45:52.989458797Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:52.990311 containerd[1478]: time="2024-11-12T20:45:52.990268724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:52.992012 containerd[1478]: time="2024-11-12T20:45:52.991966286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 4.626859685s" Nov 12 20:45:52.992012 containerd[1478]: time="2024-11-12T20:45:52.992006773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:45:52.993637 containerd[1478]: time="2024-11-12T20:45:52.993564639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:45:52.995051 containerd[1478]: time="2024-11-12T20:45:52.994996454Z" level=info msg="CreateContainer within sandbox \"96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:45:53.016032 containerd[1478]: time="2024-11-12T20:45:53.015964358Z" level=info msg="CreateContainer within sandbox \"96d9f53f5121aa5d1d39650e3e43a1a72b7845e4183bc7b7e8cced536aac5e9f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b435163fd82a0352b0306dbdd280148c06765d10f8d50fe8c0b70a2b8cd47a30\"" Nov 12 20:45:53.016729 containerd[1478]: time="2024-11-12T20:45:53.016689434Z" level=info msg="StartContainer for \"b435163fd82a0352b0306dbdd280148c06765d10f8d50fe8c0b70a2b8cd47a30\"" Nov 12 20:45:53.060098 systemd[1]: Started cri-containerd-b435163fd82a0352b0306dbdd280148c06765d10f8d50fe8c0b70a2b8cd47a30.scope - libcontainer container b435163fd82a0352b0306dbdd280148c06765d10f8d50fe8c0b70a2b8cd47a30. 
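The apiserver pull timing above is internally consistent: PullImage starts at 20:45:48.365070072, and adding the reported 4.626859685s lands within about 36 µs of the completion stamp 20:45:52.991966286. A quick cross-check of those numbers, reusing the RFC 3339 timestamps from the containerd messages:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	start, _ := time.Parse(time.RFC3339Nano, "2024-11-12T20:45:48.365070072Z") // PullImage
    	done, _ := time.Parse(time.RFC3339Nano, "2024-11-12T20:45:52.991966286Z")  // Pulled image
    	reported := 4626859685 * time.Nanosecond                                   // "in 4.626859685s"

    	fmt.Println("journal delta:", done.Sub(start))             // 4.626896214s
    	fmt.Println("skew vs reported:", done.Sub(start)-reported) // ~36µs of logging overhead

    	const size = 43457038 // digest size in bytes, from the same log entry
    	fmt.Printf("effective rate: %.1f MiB/s\n", size/reported.Seconds()/(1<<20))
    }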
Nov 12 20:45:53.205062 containerd[1478]: time="2024-11-12T20:45:53.205001564Z" level=info msg="StartContainer for \"b435163fd82a0352b0306dbdd280148c06765d10f8d50fe8c0b70a2b8cd47a30\" returns successfully" Nov 12 20:45:53.331250 kubelet[2665]: E1112 20:45:53.331201 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:53.354253 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:56728.service - OpenSSH per-connection server daemon (10.0.0.1:56728). Nov 12 20:45:53.401358 sshd[4744]: Accepted publickey for core from 10.0.0.1 port 56728 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:53.403543 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:53.408259 systemd-logind[1451]: New session 18 of user core. Nov 12 20:45:53.414823 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:45:53.481746 containerd[1478]: time="2024-11-12T20:45:53.481674235Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:53.484133 containerd[1478]: time="2024-11-12T20:45:53.482783576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:45:53.485912 containerd[1478]: time="2024-11-12T20:45:53.485841863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 492.199315ms" Nov 12 20:45:53.485912 containerd[1478]: time="2024-11-12T20:45:53.485907548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:45:53.489124 containerd[1478]: time="2024-11-12T20:45:53.488128815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:45:53.489491 containerd[1478]: time="2024-11-12T20:45:53.489454679Z" level=info msg="CreateContainer within sandbox \"26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:45:53.527826 containerd[1478]: time="2024-11-12T20:45:53.527755274Z" level=info msg="CreateContainer within sandbox \"26265aa84bfa9986133510d9a8fc5d370d5d374e34715e621e7448c1074486cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d7287760db888a3c52ae7979a5b4da161aa28624762670098e6ee4ee2bbaf829\"" Nov 12 20:45:53.531963 containerd[1478]: time="2024-11-12T20:45:53.530461949Z" level=info msg="StartContainer for \"d7287760db888a3c52ae7979a5b4da161aa28624762670098e6ee4ee2bbaf829\"" Nov 12 20:45:53.572829 systemd[1]: Started cri-containerd-d7287760db888a3c52ae7979a5b4da161aa28624762670098e6ee4ee2bbaf829.scope - libcontainer container d7287760db888a3c52ae7979a5b4da161aa28624762670098e6ee4ee2bbaf829. Nov 12 20:45:53.596187 sshd[4744]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:53.601050 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:56728.service: Deactivated successfully. 
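Note the second PullImage of the same apiserver tag above completes in 492.199315ms with only 77 bytes read: the layers are already in the content store, so only manifest resolution touches the registry. The check-before-pull shape of that cache hit, sketched with the containerd Go client (containerd's Pull is itself effectively idempotent; this just makes the hit explicit, and import paths again assume the v1 client):

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/errdefs"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.0"
    	img, err := client.GetImage(ctx, ref)
    	switch {
    	case err == nil:
    		log.Println("cache hit:", img.Name()) // the 492ms / 77-byte case
    	case errdefs.IsNotFound(err):
    		if img, err = client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
    			log.Fatal(err)
    		}
    		log.Println("pulled:", img.Name()) // the 4.6s case
    	default:
    		log.Fatal(err)
    	}
    }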
Nov 12 20:45:53.603810 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:45:53.605463 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:45:53.608925 systemd-logind[1451]: Removed session 18. Nov 12 20:45:54.076492 containerd[1478]: time="2024-11-12T20:45:54.076440792Z" level=info msg="StartContainer for \"d7287760db888a3c52ae7979a5b4da161aa28624762670098e6ee4ee2bbaf829\" returns successfully" Nov 12 20:45:54.211597 kubelet[2665]: I1112 20:45:54.211518 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868b9f8d6-nv4z7" podStartSLOduration=38.772785189 podStartE2EDuration="44.211493142s" podCreationTimestamp="2024-11-12 20:45:10 +0000 UTC" firstStartedPulling="2024-11-12 20:45:48.048425348 +0000 UTC m=+64.805176594" lastFinishedPulling="2024-11-12 20:45:53.487133301 +0000 UTC m=+70.243884547" observedRunningTime="2024-11-12 20:45:54.192492372 +0000 UTC m=+70.949243638" watchObservedRunningTime="2024-11-12 20:45:54.211493142 +0000 UTC m=+70.968244388" Nov 12 20:45:54.212413 kubelet[2665]: I1112 20:45:54.212364 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868b9f8d6-cpnj4" podStartSLOduration=37.431538456 podStartE2EDuration="44.21235382s" podCreationTimestamp="2024-11-12 20:45:10 +0000 UTC" firstStartedPulling="2024-11-12 20:45:46.212402353 +0000 UTC m=+62.969153599" lastFinishedPulling="2024-11-12 20:45:52.993217707 +0000 UTC m=+69.749968963" observedRunningTime="2024-11-12 20:45:54.21197784 +0000 UTC m=+70.968729116" watchObservedRunningTime="2024-11-12 20:45:54.21235382 +0000 UTC m=+70.969105086" Nov 12 20:45:55.082547 kubelet[2665]: I1112 20:45:55.082509 2665 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:45:56.331179 containerd[1478]: time="2024-11-12T20:45:56.331119285Z" level=info msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\"" Nov 12 20:45:56.331731 containerd[1478]: time="2024-11-12T20:45:56.331116319Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\"" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.596 [INFO][4841] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4841] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" iface="eth0" netns="/var/run/netns/cni-0428c5f9-da34-49a8-123b-0d7ce1741d16" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4841] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" iface="eth0" netns="/var/run/netns/cni-0428c5f9-da34-49a8-123b-0d7ce1741d16" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4841] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" iface="eth0" netns="/var/run/netns/cni-0428c5f9-da34-49a8-123b-0d7ce1741d16" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4841] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.624 [INFO][4859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" HandleID="k8s-pod-network.ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.624 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.624 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.632 [WARNING][4859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" HandleID="k8s-pod-network.ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.632 [INFO][4859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" HandleID="k8s-pod-network.ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.634 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:56.641011 containerd[1478]: 2024-11-12 20:45:56.637 [INFO][4841] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846" Nov 12 20:45:56.643459 containerd[1478]: time="2024-11-12T20:45:56.642192730Z" level=info msg="TearDown network for sandbox \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\" successfully" Nov 12 20:45:56.643459 containerd[1478]: time="2024-11-12T20:45:56.642231635Z" level=info msg="StopPodSandbox for \"ab347010296929d1cad6f713b772f37d614d3ec07d6b27fd9cae67eca1733846\" returns successfully" Nov 12 20:45:56.643548 kubelet[2665]: E1112 20:45:56.642614 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:56.645146 containerd[1478]: time="2024-11-12T20:45:56.645102676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x8dc6,Uid:e89180de-1bfe-48ec-9535-6f9b7004bbe3,Namespace:kube-system,Attempt:1,}" Nov 12 20:45:56.646384 systemd[1]: run-netns-cni\x2d0428c5f9\x2dda34\x2d49a8\x2d123b\x2d0d7ce1741d16.mount: Deactivated successfully. 
Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.596 [INFO][4842] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.596 [INFO][4842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" iface="eth0" netns="/var/run/netns/cni-0302ac16-70c5-4773-df70-de5db13969b6" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.597 [INFO][4842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" iface="eth0" netns="/var/run/netns/cni-0302ac16-70c5-4773-df70-de5db13969b6" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.598 [INFO][4842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" iface="eth0" netns="/var/run/netns/cni-0302ac16-70c5-4773-df70-de5db13969b6" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.598 [INFO][4842] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.598 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.631 [INFO][4860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.631 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.635 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.644 [WARNING][4860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.644 [INFO][4860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.655 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:56.661343 containerd[1478]: 2024-11-12 20:45:56.658 [INFO][4842] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:45:56.664300 containerd[1478]: time="2024-11-12T20:45:56.662907849Z" level=info msg="TearDown network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" successfully" Nov 12 20:45:56.664300 containerd[1478]: time="2024-11-12T20:45:56.662944911Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" returns successfully" Nov 12 20:45:56.664300 containerd[1478]: time="2024-11-12T20:45:56.663571120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c846d996f-fnghq,Uid:f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50,Namespace:calico-system,Attempt:1,}" Nov 12 20:45:56.665089 systemd[1]: run-netns-cni\x2d0302ac16\x2d70c5\x2d4773\x2ddf70\x2dde5db13969b6.mount: Deactivated successfully. Nov 12 20:45:57.201578 containerd[1478]: time="2024-11-12T20:45:57.201530766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:57.207728 containerd[1478]: time="2024-11-12T20:45:57.207658434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:45:57.208926 containerd[1478]: time="2024-11-12T20:45:57.208877242Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:57.215647 containerd[1478]: time="2024-11-12T20:45:57.214181742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:57.216928 containerd[1478]: time="2024-11-12T20:45:57.216883061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 3.728712126s" Nov 12 20:45:57.217068 containerd[1478]: time="2024-11-12T20:45:57.216927366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:45:57.223206 containerd[1478]: time="2024-11-12T20:45:57.223146270Z" level=info msg="CreateContainer within sandbox \"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:45:57.265861 containerd[1478]: time="2024-11-12T20:45:57.265806904Z" level=info msg="CreateContainer within sandbox \"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"557c135f12fd3e27cee16b00bffd8ee61396478137be75c3daa2346f43e26956\"" Nov 12 20:45:57.266427 containerd[1478]: time="2024-11-12T20:45:57.266390673Z" level=info msg="StartContainer for \"557c135f12fd3e27cee16b00bffd8ee61396478137be75c3daa2346f43e26956\"" Nov 12 20:45:57.308212 systemd[1]: Started 
cri-containerd-557c135f12fd3e27cee16b00bffd8ee61396478137be75c3daa2346f43e26956.scope - libcontainer container 557c135f12fd3e27cee16b00bffd8ee61396478137be75c3daa2346f43e26956. Nov 12 20:45:57.332851 containerd[1478]: time="2024-11-12T20:45:57.332810411Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:45:57.428900 containerd[1478]: time="2024-11-12T20:45:57.428832156Z" level=info msg="StartContainer for \"557c135f12fd3e27cee16b00bffd8ee61396478137be75c3daa2346f43e26956\" returns successfully" Nov 12 20:45:57.430085 kubelet[2665]: I1112 20:45:57.430048 2665 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:45:57.430175 kubelet[2665]: I1112 20:45:57.430106 2665 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:45:57.432764 systemd-networkd[1404]: caliac20f3a805b: Link UP Nov 12 20:45:57.434220 systemd-networkd[1404]: caliac20f3a805b: Gained carrier Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.235 [INFO][4883] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0 coredns-7db6d8ff4d- kube-system e89180de-1bfe-48ec-9535-6f9b7004bbe3 1046 0 2024-11-12 20:44:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-x8dc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac20f3a805b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.236 [INFO][4883] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.274 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" HandleID="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.288 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" HandleID="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003754f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-x8dc6", "timestamp":"2024-11-12 20:45:57.274502747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.288 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.288 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.288 [INFO][4911] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.290 [INFO][4911] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.294 [INFO][4911] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.304 [INFO][4911] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.306 [INFO][4911] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.308 [INFO][4911] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.308 [INFO][4911] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.310 [INFO][4911] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6 Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.328 [INFO][4911] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4911] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4911] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" host="localhost" Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
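The [4911] trace above is the allocation algorithm in miniature: under the host-wide lock, look up the host's block affinities, confirm the affine block 192.168.88.128/26, scan it for a free address, and write the block back to claim 192.168.88.132. A toy Go version of the scan step, assuming a simple offset map over the /26; the real allocator also manages handles, affinity conflicts, and retries:

    package main

    import (
    	"fmt"
    	"net"
    )

    // block is a host-affine allocation block, e.g. 192.168.88.128/26.
    type block struct {
    	cidr *net.IPNet
    	used map[uint8]bool // offset within the block -> allocated
    }

    // assign claims and returns the first free address in the block.
    func (b *block) assign() (net.IP, bool) {
    	ones, bits := b.cidr.Mask.Size()
    	base := b.cidr.IP.To4()
    	for off := 0; off < 1<<(bits-ones); off++ {
    		if b.used[uint8(off)] {
    			continue
    		}
    		b.used[uint8(off)] = true
    		return net.IPv4(base[0], base[1], base[2], base[3]+byte(off)), true
    	}
    	return nil, false
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
    	// offsets 0-3 (.128-.131) already taken, as in the log above
    	b := &block{cidr: cidr, used: map[uint8]bool{0: true, 1: true, 2: true, 3: true}}
    	ip, _ := b.assign()
    	fmt.Println("assigned:", ip) // 192.168.88.132
    }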
Nov 12 20:45:57.677902 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" HandleID="k8s-pod-network.6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Workload="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 20:45:57.428 [INFO][4883] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e89180de-1bfe-48ec-9535-6f9b7004bbe3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-x8dc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac20f3a805b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 20:45:57.428 [INFO][4883] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 20:45:57.428 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac20f3a805b ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 20:45:57.433 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 
20:45:57.434 [INFO][4883] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e89180de-1bfe-48ec-9535-6f9b7004bbe3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6", Pod:"coredns-7db6d8ff4d-x8dc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac20f3a805b", MAC:"2e:f9:cb:05:6d:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:57.678461 containerd[1478]: 2024-11-12 20:45:57.675 [INFO][4883] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x8dc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x8dc6-eth0" Nov 12 20:45:57.763094 containerd[1478]: time="2024-11-12T20:45:57.762929188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:57.763094 containerd[1478]: time="2024-11-12T20:45:57.763027617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:57.763094 containerd[1478]: time="2024-11-12T20:45:57.763042245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:57.764349 containerd[1478]: time="2024-11-12T20:45:57.764098280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:57.785875 systemd[1]: Started cri-containerd-6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6.scope - libcontainer container 6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6. 
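The WorkloadEndpoint dump above prints numeric struct fields in Go hex notation, so the CoreDNS ports appear as Port:0x35 and Port:0x23c1. Decoding them confirms the expected values:

    package main

    import "fmt"

    func main() {
    	fmt.Println(0x35)   // 53: the dns (UDP) and dns-tcp (TCP) ports
    	fmt.Println(0x23c1) // 9153: the coredns Prometheus metrics port
    }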
Nov 12 20:45:57.798396 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:57.824647 containerd[1478]: time="2024-11-12T20:45:57.824587919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x8dc6,Uid:e89180de-1bfe-48ec-9535-6f9b7004bbe3,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6\"" Nov 12 20:45:57.825685 kubelet[2665]: E1112 20:45:57.825608 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:57.827186 containerd[1478]: time="2024-11-12T20:45:57.827114132Z" level=info msg="CreateContainer within sandbox \"6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:45:57.984596 systemd-networkd[1404]: cali7b94d7a1acd: Link UP Nov 12 20:45:57.985008 systemd-networkd[1404]: cali7b94d7a1acd: Gained carrier Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.256 [INFO][4894] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0 calico-kube-controllers-7c846d996f- calico-system f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50 1045 0 2024-11-12 20:45:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c846d996f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c846d996f-fnghq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b94d7a1acd [] []}} ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.256 [INFO][4894] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.297 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" HandleID="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.306 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" HandleID="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c846d996f-fnghq", "timestamp":"2024-11-12 20:45:57.297081147 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.307 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.426 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.674 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.753 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.888 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.890 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.906 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.906 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.908 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277 Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.923 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.979 [INFO][4917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.979 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" host="localhost" Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.979 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:45:58.470500 containerd[1478]: 2024-11-12 20:45:57.979 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" HandleID="k8s-pod-network.9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:57.981 [INFO][4894] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0", GenerateName:"calico-kube-controllers-7c846d996f-", Namespace:"calico-system", SelfLink:"", UID:"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c846d996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c846d996f-fnghq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b94d7a1acd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:57.981 [INFO][4894] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:57.981 [INFO][4894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b94d7a1acd ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:57.985 [INFO][4894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:57.985 [INFO][4894] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0", GenerateName:"calico-kube-controllers-7c846d996f-", Namespace:"calico-system", SelfLink:"", UID:"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c846d996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277", Pod:"calico-kube-controllers-7c846d996f-fnghq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b94d7a1acd", MAC:"9a:c9:25:29:e2:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:58.546385 containerd[1478]: 2024-11-12 20:45:58.295 [INFO][4894] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277" Namespace="calico-system" Pod="calico-kube-controllers-7c846d996f-fnghq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.707 [INFO][4973] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.707 [INFO][4973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" iface="eth0" netns="/var/run/netns/cni-f6eff574-de2d-5977-d8ea-3d332f318700" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.708 [INFO][4973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" iface="eth0" netns="/var/run/netns/cni-f6eff574-de2d-5977-d8ea-3d332f318700" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.708 [INFO][4973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" iface="eth0" netns="/var/run/netns/cni-f6eff574-de2d-5977-d8ea-3d332f318700" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.708 [INFO][4973] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.708 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.730 [INFO][5006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.730 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:57.979 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:58.223 [WARNING][5006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:58.223 [INFO][5006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:58.543 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:45:58.549408 containerd[1478]: 2024-11-12 20:45:58.546 [INFO][4973] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:45:58.554752 containerd[1478]: time="2024-11-12T20:45:58.552927018Z" level=info msg="TearDown network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" successfully" Nov 12 20:45:58.554752 containerd[1478]: time="2024-11-12T20:45:58.552988106Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" returns successfully" Nov 12 20:45:58.554752 containerd[1478]: time="2024-11-12T20:45:58.554239246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7cws,Uid:ae276fed-5dad-46fe-8b4a-5f71fa73249a,Namespace:kube-system,Attempt:1,}" Nov 12 20:45:58.554847 kubelet[2665]: E1112 20:45:58.553645 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:58.553327 systemd[1]: run-netns-cni\x2df6eff574\x2dde2d\x2d5977\x2dd8ea\x2d3d332f318700.mount: Deactivated successfully. 
Nov 12 20:45:58.603555 kubelet[2665]: I1112 20:45:58.603469 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qwkzp" podStartSLOduration=37.439179698 podStartE2EDuration="48.603446808s" podCreationTimestamp="2024-11-12 20:45:10 +0000 UTC" firstStartedPulling="2024-11-12 20:45:46.055840704 +0000 UTC m=+62.812591950" lastFinishedPulling="2024-11-12 20:45:57.220107814 +0000 UTC m=+73.976859060" observedRunningTime="2024-11-12 20:45:58.603111855 +0000 UTC m=+75.359863091" watchObservedRunningTime="2024-11-12 20:45:58.603446808 +0000 UTC m=+75.360198054" Nov 12 20:45:58.609669 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:43430.service - OpenSSH per-connection server daemon (10.0.0.1:43430). Nov 12 20:45:58.620835 containerd[1478]: time="2024-11-12T20:45:58.620708858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:58.620835 containerd[1478]: time="2024-11-12T20:45:58.620771277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:58.620835 containerd[1478]: time="2024-11-12T20:45:58.620790825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:58.621131 containerd[1478]: time="2024-11-12T20:45:58.620924211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:58.629914 systemd-networkd[1404]: caliac20f3a805b: Gained IPv6LL Nov 12 20:45:58.644781 systemd[1]: Started cri-containerd-9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277.scope - libcontainer container 9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277. Nov 12 20:45:58.657759 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:58.683133 containerd[1478]: time="2024-11-12T20:45:58.683090706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c846d996f-fnghq,Uid:f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50,Namespace:calico-system,Attempt:1,} returns sandbox id \"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277\"" Nov 12 20:45:58.736320 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 43430 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:58.737523 containerd[1478]: time="2024-11-12T20:45:58.684529506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:45:58.737987 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:58.740588 systemd-logind[1451]: New session 19 of user core. Nov 12 20:45:58.746906 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:45:58.850273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609454375.mount: Deactivated successfully. Nov 12 20:45:58.853957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657041553.mount: Deactivated successfully. Nov 12 20:45:59.116855 sshd[5073]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:59.122297 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:43430.service: Deactivated successfully. Nov 12 20:45:59.124601 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:45:59.125290 systemd-logind[1451]: Session 19 logged out.
Waiting for processes to exit. Nov 12 20:45:59.126290 systemd-logind[1451]: Removed session 19. Nov 12 20:45:59.263376 containerd[1478]: time="2024-11-12T20:45:59.263273566Z" level=info msg="CreateContainer within sandbox \"6e0b6e354dbff71ba5a63c843e7d6b6b07ce51b40ab0aa05098bc604fbac54f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b496d0d3e14017e2f17e43f7e576a01c229af6a6d31c4883869092ad4998fbf0\"" Nov 12 20:45:59.264368 containerd[1478]: time="2024-11-12T20:45:59.264291490Z" level=info msg="StartContainer for \"b496d0d3e14017e2f17e43f7e576a01c229af6a6d31c4883869092ad4998fbf0\"" Nov 12 20:45:59.299323 systemd[1]: Started cri-containerd-b496d0d3e14017e2f17e43f7e576a01c229af6a6d31c4883869092ad4998fbf0.scope - libcontainer container b496d0d3e14017e2f17e43f7e576a01c229af6a6d31c4883869092ad4998fbf0. Nov 12 20:45:59.581087 containerd[1478]: time="2024-11-12T20:45:59.581042276Z" level=info msg="StartContainer for \"b496d0d3e14017e2f17e43f7e576a01c229af6a6d31c4883869092ad4998fbf0\" returns successfully" Nov 12 20:45:59.597229 systemd-networkd[1404]: cali9de66a15435: Link UP Nov 12 20:45:59.597490 systemd-networkd[1404]: cali9de66a15435: Gained carrier Nov 12 20:45:59.781843 systemd-networkd[1404]: cali7b94d7a1acd: Gained IPv6LL Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.347 [INFO][5141] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0 coredns-7db6d8ff4d- kube-system ae276fed-5dad-46fe-8b4a-5f71fa73249a 1059 0 2024-11-12 20:44:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-s7cws eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9de66a15435 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.347 [INFO][5141] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.374 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" HandleID="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.406 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" HandleID="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-s7cws", "timestamp":"2024-11-12 20:45:59.374057742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.406 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.407 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.407 [INFO][5162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.408 [INFO][5162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.412 [INFO][5162] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.416 [INFO][5162] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.417 [INFO][5162] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.419 [INFO][5162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.419 [INFO][5162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.421 [INFO][5162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379 Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.451 [INFO][5162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.590 [INFO][5162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.590 [INFO][5162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" host="localhost" Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.590 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:45:59.839617 containerd[1478]: 2024-11-12 20:45:59.590 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" HandleID="k8s-pod-network.66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 20:45:59.593 [INFO][5141] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae276fed-5dad-46fe-8b4a-5f71fa73249a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-s7cws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9de66a15435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 20:45:59.593 [INFO][5141] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 20:45:59.593 [INFO][5141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9de66a15435 ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 20:45:59.596 [INFO][5141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 
20:45:59.596 [INFO][5141] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae276fed-5dad-46fe-8b4a-5f71fa73249a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379", Pod:"coredns-7db6d8ff4d-s7cws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9de66a15435", MAC:"fa:a4:5e:2d:51:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:45:59.840342 containerd[1478]: 2024-11-12 20:45:59.836 [INFO][5141] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7cws" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:45:59.910504 containerd[1478]: time="2024-11-12T20:45:59.910394796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:45:59.910504 containerd[1478]: time="2024-11-12T20:45:59.910458088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:45:59.910504 containerd[1478]: time="2024-11-12T20:45:59.910474108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:59.910822 containerd[1478]: time="2024-11-12T20:45:59.910586844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:45:59.976765 systemd[1]: Started cri-containerd-66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379.scope - libcontainer container 66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379. 
Nov 12 20:45:59.989315 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:46:00.012115 containerd[1478]: time="2024-11-12T20:46:00.011696921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7cws,Uid:ae276fed-5dad-46fe-8b4a-5f71fa73249a,Namespace:kube-system,Attempt:1,} returns sandbox id \"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379\"" Nov 12 20:46:00.012897 kubelet[2665]: E1112 20:46:00.012447 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:00.014952 containerd[1478]: time="2024-11-12T20:46:00.014760858Z" level=info msg="CreateContainer within sandbox \"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:46:00.100516 kubelet[2665]: E1112 20:46:00.099529 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:00.120428 kubelet[2665]: I1112 20:46:00.119386 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x8dc6" podStartSLOduration=61.11936066 podStartE2EDuration="1m1.11936066s" podCreationTimestamp="2024-11-12 20:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:00.119094629 +0000 UTC m=+76.875845896" watchObservedRunningTime="2024-11-12 20:46:00.11936066 +0000 UTC m=+76.876111906" Nov 12 20:46:00.121145 containerd[1478]: time="2024-11-12T20:46:00.121101274Z" level=info msg="CreateContainer within sandbox \"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b71ab009315ae05ff6892a8c9b0f7add499d63f8583cf8b583c85e3ed3e1ac77\"" Nov 12 20:46:00.126648 containerd[1478]: time="2024-11-12T20:46:00.124376616Z" level=info msg="StartContainer for \"b71ab009315ae05ff6892a8c9b0f7add499d63f8583cf8b583c85e3ed3e1ac77\"" Nov 12 20:46:00.190788 systemd[1]: Started cri-containerd-b71ab009315ae05ff6892a8c9b0f7add499d63f8583cf8b583c85e3ed3e1ac77.scope - libcontainer container b71ab009315ae05ff6892a8c9b0f7add499d63f8583cf8b583c85e3ed3e1ac77. 
Nov 12 20:46:00.238507 containerd[1478]: time="2024-11-12T20:46:00.238050933Z" level=info msg="StartContainer for \"b71ab009315ae05ff6892a8c9b0f7add499d63f8583cf8b583c85e3ed3e1ac77\" returns successfully" Nov 12 20:46:00.331432 kubelet[2665]: E1112 20:46:00.331387 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:01.103470 kubelet[2665]: E1112 20:46:01.103424 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:01.104707 kubelet[2665]: E1112 20:46:01.103570 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:01.253846 systemd-networkd[1404]: cali9de66a15435: Gained IPv6LL Nov 12 20:46:01.424947 kubelet[2665]: I1112 20:46:01.424270 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s7cws" podStartSLOduration=62.424244503 podStartE2EDuration="1m2.424244503s" podCreationTimestamp="2024-11-12 20:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:01.423969684 +0000 UTC m=+78.180720930" watchObservedRunningTime="2024-11-12 20:46:01.424244503 +0000 UTC m=+78.180995749" Nov 12 20:46:02.106927 kubelet[2665]: E1112 20:46:02.106880 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:02.107534 kubelet[2665]: E1112 20:46:02.107515 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:02.458600 containerd[1478]: time="2024-11-12T20:46:02.458446905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:02.505510 containerd[1478]: time="2024-11-12T20:46:02.505394199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:46:02.549739 containerd[1478]: time="2024-11-12T20:46:02.549616322Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:02.596936 containerd[1478]: time="2024-11-12T20:46:02.596871067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:46:02.597843 containerd[1478]: time="2024-11-12T20:46:02.597815253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 3.913259255s" Nov 12 20:46:02.597889 containerd[1478]: 
time="2024-11-12T20:46:02.597847375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:46:02.606572 containerd[1478]: time="2024-11-12T20:46:02.606519977Z" level=info msg="CreateContainer within sandbox \"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:46:03.046148 containerd[1478]: time="2024-11-12T20:46:03.046060198Z" level=info msg="CreateContainer within sandbox \"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"105c364425a7e80ffb498d15b7b2311de8e66f25c033c764eaa9381f70bee1e4\"" Nov 12 20:46:03.046762 containerd[1478]: time="2024-11-12T20:46:03.046717483Z" level=info msg="StartContainer for \"105c364425a7e80ffb498d15b7b2311de8e66f25c033c764eaa9381f70bee1e4\"" Nov 12 20:46:03.079800 systemd[1]: Started cri-containerd-105c364425a7e80ffb498d15b7b2311de8e66f25c033c764eaa9381f70bee1e4.scope - libcontainer container 105c364425a7e80ffb498d15b7b2311de8e66f25c033c764eaa9381f70bee1e4. Nov 12 20:46:03.111398 kubelet[2665]: E1112 20:46:03.111298 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:03.111398 kubelet[2665]: E1112 20:46:03.111320 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:03.135271 containerd[1478]: time="2024-11-12T20:46:03.135211331Z" level=info msg="StartContainer for \"105c364425a7e80ffb498d15b7b2311de8e66f25c033c764eaa9381f70bee1e4\" returns successfully" Nov 12 20:46:03.332137 kubelet[2665]: E1112 20:46:03.331974 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:04.143253 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:43438.service - OpenSSH per-connection server daemon (10.0.0.1:43438). Nov 12 20:46:04.195522 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 43438 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:04.197445 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:04.202267 systemd-logind[1451]: New session 20 of user core. Nov 12 20:46:04.211136 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:46:04.211982 kubelet[2665]: I1112 20:46:04.211230 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c846d996f-fnghq" podStartSLOduration=50.296675392 podStartE2EDuration="54.211208788s" podCreationTimestamp="2024-11-12 20:45:10 +0000 UTC" firstStartedPulling="2024-11-12 20:45:58.684319593 +0000 UTC m=+75.441070829" lastFinishedPulling="2024-11-12 20:46:02.598852979 +0000 UTC m=+79.355604225" observedRunningTime="2024-11-12 20:46:04.210974997 +0000 UTC m=+80.967726243" watchObservedRunningTime="2024-11-12 20:46:04.211208788 +0000 UTC m=+80.967960034" Nov 12 20:46:04.570324 sshd[5356]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:04.574284 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:43438.service: Deactivated successfully. 
Nov 12 20:46:04.576505 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:46:04.577262 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:46:04.578202 systemd-logind[1451]: Removed session 20. Nov 12 20:46:09.582146 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:54316.service - OpenSSH per-connection server daemon (10.0.0.1:54316). Nov 12 20:46:09.621038 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 54316 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:09.622772 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:09.627189 systemd-logind[1451]: New session 21 of user core. Nov 12 20:46:09.635773 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:46:09.748873 sshd[5384]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:09.753507 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:54316.service: Deactivated successfully. Nov 12 20:46:09.756167 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:46:09.756940 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:46:09.757976 systemd-logind[1451]: Removed session 21. Nov 12 20:46:10.309267 kubelet[2665]: E1112 20:46:10.309214 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:14.762005 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:54322.service - OpenSSH per-connection server daemon (10.0.0.1:54322). Nov 12 20:46:15.290520 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 54322 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:15.292254 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:15.296864 systemd-logind[1451]: New session 22 of user core. Nov 12 20:46:15.311796 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:46:15.331315 kubelet[2665]: E1112 20:46:15.331279 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:15.423392 sshd[5443]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:15.432593 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:54322.service: Deactivated successfully. Nov 12 20:46:15.434958 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:46:15.436743 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:46:15.444225 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:54338.service - OpenSSH per-connection server daemon (10.0.0.1:54338). Nov 12 20:46:15.445414 systemd-logind[1451]: Removed session 22. Nov 12 20:46:15.485413 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 54338 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:15.487696 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:15.492795 systemd-logind[1451]: New session 23 of user core. Nov 12 20:46:15.502808 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:46:16.260894 sshd[5457]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:16.275030 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:54338.service: Deactivated successfully. 
Nov 12 20:46:16.277131 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:46:16.278926 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:46:16.284870 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:42230.service - OpenSSH per-connection server daemon (10.0.0.1:42230). Nov 12 20:46:16.285864 systemd-logind[1451]: Removed session 23. Nov 12 20:46:16.321676 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 42230 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:16.323314 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:16.327722 systemd-logind[1451]: New session 24 of user core. Nov 12 20:46:16.335795 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:46:19.814568 sshd[5469]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:19.821486 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242). Nov 12 20:46:19.914193 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:42230.service: Deactivated successfully. Nov 12 20:46:19.916783 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:46:19.917493 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:46:19.918450 systemd-logind[1451]: Removed session 24. Nov 12 20:46:19.943802 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:19.945715 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:19.950687 systemd-logind[1451]: New session 25 of user core. Nov 12 20:46:19.959762 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:46:20.833745 sshd[5496]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:20.842601 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:42242.service: Deactivated successfully. Nov 12 20:46:20.845421 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:46:20.848956 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:46:20.856312 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:42248.service - OpenSSH per-connection server daemon (10.0.0.1:42248). Nov 12 20:46:20.858132 systemd-logind[1451]: Removed session 25. Nov 12 20:46:20.893551 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 42248 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:20.895969 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:20.901784 systemd-logind[1451]: New session 26 of user core. Nov 12 20:46:20.910112 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:46:21.071270 sshd[5513]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:21.075896 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:42248.service: Deactivated successfully. Nov 12 20:46:21.078237 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:46:21.078931 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:46:21.079964 systemd-logind[1451]: Removed session 26. Nov 12 20:46:26.094192 systemd[1]: Started sshd@26-10.0.0.49:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108). 
Nov 12 20:46:26.166923 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:26.169354 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:26.174600 systemd-logind[1451]: New session 27 of user core. Nov 12 20:46:26.185924 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:46:26.308128 sshd[5527]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:26.312562 systemd[1]: sshd@26-10.0.0.49:22-10.0.0.1:49108.service: Deactivated successfully. Nov 12 20:46:26.315131 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:46:26.315883 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:46:26.316915 systemd-logind[1451]: Removed session 27. Nov 12 20:46:27.335280 kubelet[2665]: I1112 20:46:27.334501 2665 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:46:31.322400 systemd[1]: Started sshd@27-10.0.0.49:22-10.0.0.1:49118.service - OpenSSH per-connection server daemon (10.0.0.1:49118). Nov 12 20:46:31.361223 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 49118 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:31.364357 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:31.369401 systemd-logind[1451]: New session 28 of user core. Nov 12 20:46:31.370722 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:46:31.504661 sshd[5552]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:31.508842 systemd[1]: sshd@27-10.0.0.49:22-10.0.0.1:49118.service: Deactivated successfully. Nov 12 20:46:31.510705 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 20:46:31.511408 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit. Nov 12 20:46:31.512438 systemd-logind[1451]: Removed session 28. Nov 12 20:46:33.945115 update_engine[1455]: I20241112 20:46:33.945039 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 12 20:46:33.945115 update_engine[1455]: I20241112 20:46:33.945103 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 12 20:46:33.945567 update_engine[1455]: I20241112 20:46:33.945552 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 12 20:46:33.946115 update_engine[1455]: I20241112 20:46:33.946090 1455 omaha_request_params.cc:62] Current group set to stable Nov 12 20:46:33.946266 update_engine[1455]: I20241112 20:46:33.946230 1455 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 12 20:46:33.946266 update_engine[1455]: I20241112 20:46:33.946244 1455 update_attempter.cc:643] Scheduling an action processor start. 
Nov 12 20:46:33.946266 update_engine[1455]: I20241112 20:46:33.946262 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 20:46:33.946365 update_engine[1455]: I20241112 20:46:33.946297 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 12 20:46:33.946365 update_engine[1455]: I20241112 20:46:33.946354 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 20:46:33.946365 update_engine[1455]: I20241112 20:46:33.946362 1455 omaha_request_action.cc:272] Request: Nov 12 20:46:33.946599 update_engine[1455]: I20241112 20:46:33.946369 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:46:33.952444 update_engine[1455]: I20241112 20:46:33.952418 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:46:33.952789 update_engine[1455]: I20241112 20:46:33.952725 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:46:33.953846 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 12 20:46:33.964810 update_engine[1455]: E20241112 20:46:33.964776 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:46:33.964872 update_engine[1455]: I20241112 20:46:33.964854 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 12 20:46:36.527413 systemd[1]: Started sshd@28-10.0.0.49:22-10.0.0.1:43308.service - OpenSSH per-connection server daemon (10.0.0.1:43308). Nov 12 20:46:36.567925 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 43308 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:36.569904 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:36.574471 systemd-logind[1451]: New session 29 of user core. Nov 12 20:46:36.579853 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 12 20:46:36.696559 sshd[5589]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:36.700815 systemd[1]: sshd@28-10.0.0.49:22-10.0.0.1:43308.service: Deactivated successfully. Nov 12 20:46:36.703592 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 20:46:36.704390 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit. Nov 12 20:46:36.705376 systemd-logind[1451]: Removed session 29. Nov 12 20:46:41.708992 systemd[1]: Started sshd@29-10.0.0.49:22-10.0.0.1:43310.service - OpenSSH per-connection server daemon (10.0.0.1:43310). Nov 12 20:46:41.770807 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 43310 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:41.772714 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:41.777387 systemd-logind[1451]: New session 30 of user core. Nov 12 20:46:41.783748 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 12 20:46:41.943185 sshd[5628]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:41.948063 systemd[1]: sshd@29-10.0.0.49:22-10.0.0.1:43310.service: Deactivated successfully.
Nov 12 20:46:41.950348 systemd[1]: session-30.scope: Deactivated successfully. Nov 12 20:46:41.951091 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit. Nov 12 20:46:41.952164 systemd-logind[1451]: Removed session 30. Nov 12 20:46:43.360144 containerd[1478]: time="2024-11-12T20:46:43.360102073Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.395 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae276fed-5dad-46fe-8b4a-5f71fa73249a", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379", Pod:"coredns-7db6d8ff4d-s7cws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9de66a15435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.395 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.395 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" iface="eth0" netns="" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.395 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.395 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.415 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.415 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.415 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.420 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.420 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.422 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:43.427462 containerd[1478]: 2024-11-12 20:46:43.424 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.428197 containerd[1478]: time="2024-11-12T20:46:43.427518545Z" level=info msg="TearDown network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" successfully" Nov 12 20:46:43.428197 containerd[1478]: time="2024-11-12T20:46:43.427548533Z" level=info msg="StopPodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" returns successfully" Nov 12 20:46:43.436334 containerd[1478]: time="2024-11-12T20:46:43.436288815Z" level=info msg="RemovePodSandbox for \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:46:43.438932 containerd[1478]: time="2024-11-12T20:46:43.438904023Z" level=info msg="Forcibly stopping sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\"" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.475 [WARNING][5688] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae276fed-5dad-46fe-8b4a-5f71fa73249a", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66898d4cba2957eb44d9a6e94ede09bfd1b407fb1bfc549195d20954410e1379", Pod:"coredns-7db6d8ff4d-s7cws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9de66a15435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.475 [INFO][5688] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.475 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" iface="eth0" netns="" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.475 [INFO][5688] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.475 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.496 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.497 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.497 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.502 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.502 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" HandleID="k8s-pod-network.5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Workload="localhost-k8s-coredns--7db6d8ff4d--s7cws-eth0" Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.503 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:46:43.508054 containerd[1478]: 2024-11-12 20:46:43.505 [INFO][5688] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2" Nov 12 20:46:43.508589 containerd[1478]: time="2024-11-12T20:46:43.508109554Z" level=info msg="TearDown network for sandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" successfully" Nov 12 20:46:43.604514 containerd[1478]: time="2024-11-12T20:46:43.604446933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:46:43.619091 containerd[1478]: time="2024-11-12T20:46:43.604542169Z" level=info msg="RemovePodSandbox \"5ca47744938b40b3ff579a904a87cd77c0b9700af65172caf90bc4657a879bd2\" returns successfully" Nov 12 20:46:43.619091 containerd[1478]: time="2024-11-12T20:46:43.605120977Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\"" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.639 [WARNING][5719] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0", GenerateName:"calico-kube-controllers-7c846d996f-", Namespace:"calico-system", SelfLink:"", UID:"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c846d996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277", Pod:"calico-kube-controllers-7c846d996f-fnghq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b94d7a1acd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.639 [INFO][5719] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.639 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" iface="eth0" netns="" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.639 [INFO][5719] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.639 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.678 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0" Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.678 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.678 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.706 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0"
Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.706 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0"
Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.707 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:46:43.712210 containerd[1478]: 2024-11-12 20:46:43.709 [INFO][5719] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"
Nov 12 20:46:43.712680 containerd[1478]: time="2024-11-12T20:46:43.712249215Z" level=info msg="TearDown network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" successfully"
Nov 12 20:46:43.712680 containerd[1478]: time="2024-11-12T20:46:43.712281147Z" level=info msg="StopPodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" returns successfully"
Nov 12 20:46:43.712806 containerd[1478]: time="2024-11-12T20:46:43.712782385Z" level=info msg="RemovePodSandbox for \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\""
Nov 12 20:46:43.712866 containerd[1478]: time="2024-11-12T20:46:43.712821221Z" level=info msg="Forcibly stopping sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\""
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.746 [WARNING][5749] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0", GenerateName:"calico-kube-controllers-7c846d996f-", Namespace:"calico-system", SelfLink:"", UID:"f7c966cf-6ba7-4ec6-b4dd-6239e2ce2d50", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c846d996f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9787cb9c151749bae35962d3bb97171fc18f43106d357674ddb600c43432e277", Pod:"calico-kube-controllers-7c846d996f-fnghq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b94d7a1acd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.747 [INFO][5749] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.747 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" iface="eth0" netns=""
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.747 [INFO][5749] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.747 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.770 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.770 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.770 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.776 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.776 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" HandleID="k8s-pod-network.1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a" Workload="localhost-k8s-calico--kube--controllers--7c846d996f--fnghq-eth0"
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.778 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:46:43.782927 containerd[1478]: 2024-11-12 20:46:43.780 [INFO][5749] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a"
Nov 12 20:46:43.783420 containerd[1478]: time="2024-11-12T20:46:43.782984769Z" level=info msg="TearDown network for sandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" successfully"
Nov 12 20:46:43.787146 containerd[1478]: time="2024-11-12T20:46:43.787115192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:46:43.787231 containerd[1478]: time="2024-11-12T20:46:43.787167423Z" level=info msg="RemovePodSandbox \"1cdfce2e6a50e2c897d497449ed9e8f2a4db0abe6eb2c90556055a0ea693ea4a\" returns successfully"
Nov 12 20:46:43.787714 containerd[1478]: time="2024-11-12T20:46:43.787685143Z" level=info msg="StopPodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\""
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.823 [WARNING][5779] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qwkzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb4c4a07-9a98-43af-84e7-91573664a62a", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76fd3e38257c1e2e00b878bf3021c9fba976841c39f093c8838b68246eeddeec", Pod:"csi-node-driver-qwkzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b6ffa96df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.823 [INFO][5779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.823 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" iface="eth0" netns=""
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.823 [INFO][5779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.823 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.861 [INFO][5787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.861 [INFO][5787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.861 [INFO][5787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.867 [WARNING][5787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.867 [INFO][5787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" HandleID="k8s-pod-network.045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5" Workload="localhost-k8s-csi--node--driver--qwkzp-eth0"
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.868 [INFO][5787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:46:43.873667 containerd[1478]: 2024-11-12 20:46:43.871 [INFO][5779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5"
Nov 12 20:46:43.874187 containerd[1478]: time="2024-11-12T20:46:43.873665049Z" level=info msg="TearDown network for sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\" successfully"
Nov 12 20:46:43.874187 containerd[1478]: time="2024-11-12T20:46:43.873699395Z" level=info msg="StopPodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\" returns successfully"
Nov 12 20:46:43.874541 containerd[1478]: time="2024-11-12T20:46:43.874472513Z" level=info msg="RemovePodSandbox for \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\""
Nov 12 20:46:43.874588 containerd[1478]: time="2024-11-12T20:46:43.874540997Z" level=info msg="Forcibly stopping sandbox \"045f7a79e479ef329366234ddb31ded11b6ce24b35dcff48da293ad417353aa5\""