Nov 12 20:54:13.984239 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:54:13.984260 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:54:13.984271 kernel: BIOS-provided physical RAM map: Nov 12 20:54:13.984278 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 20:54:13.984284 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 12 20:54:13.984290 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 20:54:13.984297 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 20:54:13.984304 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 20:54:13.984310 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Nov 12 20:54:13.984316 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Nov 12 20:54:13.984328 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Nov 12 20:54:13.984335 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Nov 12 20:54:13.984341 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Nov 12 20:54:13.984347 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Nov 12 20:54:13.984355 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Nov 12 20:54:13.984364 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 20:54:13.984374 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Nov 12 20:54:13.984380 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Nov 12 20:54:13.984387 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 20:54:13.984394 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 12 20:54:13.984400 kernel: NX (Execute Disable) protection: active Nov 12 20:54:13.984407 kernel: APIC: Static calls initialized Nov 12 20:54:13.984414 kernel: efi: EFI v2.7 by EDK II Nov 12 20:54:13.984421 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Nov 12 20:54:13.984427 kernel: SMBIOS 2.8 present. 
Nov 12 20:54:13.984434 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Nov 12 20:54:13.984441 kernel: Hypervisor detected: KVM Nov 12 20:54:13.984450 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:54:13.984457 kernel: kvm-clock: using sched offset of 5213758228 cycles Nov 12 20:54:13.984464 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:54:13.984471 kernel: tsc: Detected 2794.744 MHz processor Nov 12 20:54:13.984478 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:54:13.984486 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:54:13.984493 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Nov 12 20:54:13.984500 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 20:54:13.984507 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:54:13.984516 kernel: Using GB pages for direct mapping Nov 12 20:54:13.984523 kernel: Secure boot disabled Nov 12 20:54:13.984530 kernel: ACPI: Early table checksum verification disabled Nov 12 20:54:13.984537 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 12 20:54:13.984549 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 12 20:54:13.984557 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984564 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984573 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 12 20:54:13.984581 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984588 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984595 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984602 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:54:13.984609 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 12 20:54:13.984616 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 12 20:54:13.984626 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Nov 12 20:54:13.984633 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 12 20:54:13.984640 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 12 20:54:13.984647 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 12 20:54:13.984654 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 12 20:54:13.984661 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 12 20:54:13.984668 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 12 20:54:13.984677 kernel: No NUMA configuration found Nov 12 20:54:13.984685 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Nov 12 20:54:13.984694 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Nov 12 20:54:13.984701 kernel: Zone ranges: Nov 12 20:54:13.984709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:54:13.984716 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Nov 12 20:54:13.984723 kernel: Normal empty Nov 12 20:54:13.984730 kernel: Movable zone start for each node Nov 12 20:54:13.984737 kernel: Early memory node ranges Nov 12 20:54:13.984744 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 12 20:54:13.984752 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 12 20:54:13.984759 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 12 20:54:13.984768 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Nov 12 20:54:13.984776 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Nov 12 20:54:13.984783 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Nov 12 20:54:13.984791 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Nov 12 20:54:13.984802 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:54:13.984811 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 20:54:13.984820 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 12 20:54:13.984829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:54:13.984838 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Nov 12 20:54:13.984850 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 12 20:54:13.984859 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Nov 12 20:54:13.984868 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 20:54:13.984877 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:54:13.984895 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:54:13.984904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 20:54:13.984913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:54:13.984922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:54:13.984931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:54:13.984943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:54:13.984952 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:54:13.984961 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 20:54:13.984970 kernel: TSC deadline timer available Nov 12 20:54:13.984979 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 12 20:54:13.984988 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 20:54:13.984997 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 12 20:54:13.985006 kernel: kvm-guest: setup PV sched yield Nov 12 20:54:13.985015 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 12 20:54:13.985025 kernel: Booting paravirtualized kernel on KVM Nov 12 20:54:13.985032 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:54:13.985040 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 12 20:54:13.985047 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Nov 12 20:54:13.985054 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Nov 12 20:54:13.985061 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 12 20:54:13.985068 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:54:13.985075 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:54:13.985086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 
20:54:13.985096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:54:13.985103 kernel: random: crng init done Nov 12 20:54:13.985111 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:54:13.985118 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 20:54:13.985125 kernel: Fallback order for Node 0: 0 Nov 12 20:54:13.985132 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Nov 12 20:54:13.985139 kernel: Policy zone: DMA32 Nov 12 20:54:13.985147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:54:13.985156 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved) Nov 12 20:54:13.985164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 20:54:13.985171 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:54:13.985178 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:54:13.985185 kernel: Dynamic Preempt: voluntary Nov 12 20:54:13.985200 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:54:13.985233 kernel: rcu: RCU event tracing is enabled. Nov 12 20:54:13.985241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 20:54:13.985249 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:54:13.985256 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:54:13.985264 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:54:13.985272 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:54:13.985282 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 20:54:13.985290 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 12 20:54:13.985297 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 20:54:13.985305 kernel: Console: colour dummy device 80x25 Nov 12 20:54:13.985315 kernel: printk: console [ttyS0] enabled Nov 12 20:54:13.985325 kernel: ACPI: Core revision 20230628 Nov 12 20:54:13.985332 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 20:54:13.985340 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:54:13.985347 kernel: x2apic enabled Nov 12 20:54:13.985355 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:54:13.985363 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 12 20:54:13.985370 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 12 20:54:13.985378 kernel: kvm-guest: setup PV IPIs Nov 12 20:54:13.985385 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 20:54:13.985395 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 12 20:54:13.985403 kernel: Calibrating delay loop (skipped) preset value.. 
5589.48 BogoMIPS (lpj=2794744) Nov 12 20:54:13.985410 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 12 20:54:13.985418 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 12 20:54:13.985425 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 12 20:54:13.985433 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:54:13.985440 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:54:13.985448 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:54:13.985456 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:54:13.985465 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 12 20:54:13.985473 kernel: RETBleed: Mitigation: untrained return thunk Nov 12 20:54:13.985483 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:54:13.985490 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:54:13.985498 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 12 20:54:13.985506 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 12 20:54:13.985514 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 12 20:54:13.985521 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:54:13.985531 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:54:13.985539 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:54:13.985546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:54:13.985554 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 12 20:54:13.985561 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:54:13.985569 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:54:13.985576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:54:13.985584 kernel: landlock: Up and running. Nov 12 20:54:13.985591 kernel: SELinux: Initializing. Nov 12 20:54:13.985601 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:54:13.985609 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:54:13.985616 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 12 20:54:13.985624 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:54:13.985632 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:54:13.985639 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:54:13.985647 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 12 20:54:13.985654 kernel: ... version: 0 Nov 12 20:54:13.985664 kernel: ... bit width: 48 Nov 12 20:54:13.985672 kernel: ... generic registers: 6 Nov 12 20:54:13.985679 kernel: ... value mask: 0000ffffffffffff Nov 12 20:54:13.985687 kernel: ... max period: 00007fffffffffff Nov 12 20:54:13.985694 kernel: ... fixed-purpose events: 0 Nov 12 20:54:13.985702 kernel: ... 
event mask: 000000000000003f Nov 12 20:54:13.985709 kernel: signal: max sigframe size: 1776 Nov 12 20:54:13.985717 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:54:13.985725 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:54:13.985734 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:54:13.985747 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:54:13.985756 kernel: .... node #0, CPUs: #1 #2 #3 Nov 12 20:54:13.985766 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 20:54:13.985775 kernel: smpboot: Max logical packages: 1 Nov 12 20:54:13.985785 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS) Nov 12 20:54:13.985794 kernel: devtmpfs: initialized Nov 12 20:54:13.985803 kernel: x86/mm: Memory block size: 128MB Nov 12 20:54:13.985813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 12 20:54:13.985823 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 12 20:54:13.985835 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Nov 12 20:54:13.985844 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 12 20:54:13.985854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 12 20:54:13.985863 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:54:13.985871 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 20:54:13.985885 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:54:13.985893 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:54:13.985901 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:54:13.985908 kernel: audit: type=2000 audit(1731444853.316:1): state=initialized audit_enabled=0 res=1 Nov 12 20:54:13.985918 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:54:13.985926 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:54:13.985933 kernel: cpuidle: using governor menu Nov 12 20:54:13.985941 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:54:13.985948 kernel: dca service started, version 1.12.1 Nov 12 20:54:13.985956 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 12 20:54:13.985963 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 12 20:54:13.985971 kernel: PCI: Using configuration type 1 for base access Nov 12 20:54:13.985978 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:54:13.985991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:54:13.986000 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:54:13.986010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:54:13.986020 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:54:13.986029 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:54:13.986037 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:54:13.986044 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:54:13.986052 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:54:13.986059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:54:13.986081 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:54:13.986089 kernel: ACPI: Interpreter enabled Nov 12 20:54:13.986097 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:54:13.986112 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:54:13.986122 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:54:13.986137 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:54:13.986154 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 12 20:54:13.986178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:54:13.986437 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:54:13.986576 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 12 20:54:13.986703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 12 20:54:13.986713 kernel: PCI host bridge to bus 0000:00 Nov 12 20:54:13.986859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:54:13.986987 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 20:54:13.987102 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:54:13.987236 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 12 20:54:13.987353 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 12 20:54:13.987499 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Nov 12 20:54:13.987658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 20:54:13.987897 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 12 20:54:13.988056 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 12 20:54:13.988191 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 12 20:54:13.988336 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 12 20:54:13.988462 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 12 20:54:13.988587 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 12 20:54:13.988714 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:54:13.988856 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 20:54:13.989002 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 12 20:54:13.989247 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 12 20:54:13.989394 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Nov 12 20:54:13.989540 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 12 20:54:13.989669 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 12 
20:54:13.989795 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 12 20:54:13.989931 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Nov 12 20:54:13.990129 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:54:13.990307 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 12 20:54:13.990437 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 12 20:54:13.990563 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Nov 12 20:54:13.990688 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 12 20:54:13.990830 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 12 20:54:13.990965 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 12 20:54:13.991120 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 12 20:54:13.991275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 12 20:54:13.991403 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 12 20:54:13.991547 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 12 20:54:13.991673 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 12 20:54:13.991684 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 20:54:13.991692 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 20:54:13.991700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:54:13.991712 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 20:54:13.991719 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 12 20:54:13.991727 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 12 20:54:13.991735 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 12 20:54:13.991743 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 12 20:54:13.991750 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 12 20:54:13.991758 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 12 20:54:13.991765 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 12 20:54:13.991773 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 12 20:54:13.991783 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 12 20:54:13.991791 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 12 20:54:13.991799 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 12 20:54:13.991806 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 12 20:54:13.991814 kernel: iommu: Default domain type: Translated Nov 12 20:54:13.991822 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:54:13.991830 kernel: efivars: Registered efivars operations Nov 12 20:54:13.991837 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:54:13.991845 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:54:13.991855 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 12 20:54:13.991863 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Nov 12 20:54:13.991870 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Nov 12 20:54:13.991884 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Nov 12 20:54:13.992011 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 12 20:54:13.992136 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 12 20:54:13.992275 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 
20:54:13.992286 kernel: vgaarb: loaded Nov 12 20:54:13.992298 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 20:54:13.992306 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 20:54:13.992314 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 20:54:13.992322 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:54:13.992329 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:54:13.992337 kernel: pnp: PnP ACPI init Nov 12 20:54:13.992495 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 12 20:54:13.992508 kernel: pnp: PnP ACPI: found 6 devices Nov 12 20:54:13.992516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:54:13.992527 kernel: NET: Registered PF_INET protocol family Nov 12 20:54:13.992535 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:54:13.992543 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 20:54:13.992551 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:54:13.992559 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 20:54:13.992567 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 20:54:13.992575 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 20:54:13.992582 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:54:13.992592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:54:13.992600 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:54:13.992608 kernel: NET: Registered PF_XDP protocol family Nov 12 20:54:13.992737 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 12 20:54:13.992867 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 12 20:54:13.992992 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 20:54:13.993107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 20:54:13.993243 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 20:54:13.993367 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 12 20:54:13.993482 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 12 20:54:13.993597 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Nov 12 20:54:13.993607 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:54:13.993615 kernel: Initialise system trusted keyrings Nov 12 20:54:13.993622 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 20:54:13.993630 kernel: Key type asymmetric registered Nov 12 20:54:13.993638 kernel: Asymmetric key parser 'x509' registered Nov 12 20:54:13.993645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:54:13.993656 kernel: io scheduler mq-deadline registered Nov 12 20:54:13.993664 kernel: io scheduler kyber registered Nov 12 20:54:13.993671 kernel: io scheduler bfq registered Nov 12 20:54:13.993679 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:54:13.993687 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 12 20:54:13.993695 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 20:54:13.993702 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 12 20:54:13.993710 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Nov 12 20:54:13.993718 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:54:13.993728 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 20:54:13.993736 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:54:13.993744 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:54:13.993889 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 12 20:54:13.993902 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:54:13.994021 kernel: rtc_cmos 00:04: registered as rtc0 Nov 12 20:54:13.994140 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:54:13 UTC (1731444853) Nov 12 20:54:13.994312 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 12 20:54:13.994329 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 20:54:13.994337 kernel: efifb: probing for efifb Nov 12 20:54:13.994345 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Nov 12 20:54:13.994352 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Nov 12 20:54:13.994360 kernel: efifb: scrolling: redraw Nov 12 20:54:13.994368 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Nov 12 20:54:13.994375 kernel: Console: switching to colour frame buffer device 100x37 Nov 12 20:54:13.994401 kernel: fb0: EFI VGA frame buffer device Nov 12 20:54:13.994411 kernel: pstore: Using crash dump compression: deflate Nov 12 20:54:13.994421 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 20:54:13.994429 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:54:13.994437 kernel: Segment Routing with IPv6 Nov 12 20:54:13.994445 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:54:13.994453 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:54:13.994461 kernel: Key type dns_resolver registered Nov 12 20:54:13.994468 kernel: IPI shorthand broadcast: enabled Nov 12 20:54:13.994476 kernel: sched_clock: Marking stable (757003472, 117976685)->(965702897, -90722740) Nov 12 20:54:13.994484 kernel: registered taskstats version 1 Nov 12 20:54:13.994494 kernel: Loading compiled-in X.509 certificates Nov 12 20:54:13.994503 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:54:13.994510 kernel: Key type .fscrypt registered Nov 12 20:54:13.994518 kernel: Key type fscrypt-provisioning registered Nov 12 20:54:13.994526 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 20:54:13.994534 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:54:13.994542 kernel: ima: No architecture policies found Nov 12 20:54:13.994550 kernel: clk: Disabling unused clocks Nov 12 20:54:13.994560 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:54:13.994570 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:54:13.994578 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:54:13.994587 kernel: Run /init as init process Nov 12 20:54:13.994594 kernel: with arguments: Nov 12 20:54:13.994602 kernel: /init Nov 12 20:54:13.994610 kernel: with environment: Nov 12 20:54:13.994618 kernel: HOME=/ Nov 12 20:54:13.994625 kernel: TERM=linux Nov 12 20:54:13.994633 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:54:13.994645 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:54:13.994655 systemd[1]: Detected virtualization kvm. Nov 12 20:54:13.994664 systemd[1]: Detected architecture x86-64. Nov 12 20:54:13.994673 systemd[1]: Running in initrd. Nov 12 20:54:13.994685 systemd[1]: No hostname configured, using default hostname. Nov 12 20:54:13.994693 systemd[1]: Hostname set to . Nov 12 20:54:13.994702 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:54:13.994710 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:54:13.994719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:54:13.994728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:54:13.994737 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:54:13.994746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:54:13.994757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:54:13.994766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:54:13.994776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:54:13.994786 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:54:13.994796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:54:13.994806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:54:13.994817 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:54:13.994826 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:54:13.994834 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:54:13.994843 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:54:13.994851 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:54:13.994860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:54:13.994868 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:54:13.994877 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 12 20:54:13.994894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:54:13.994905 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:54:13.994913 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:54:13.994922 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:54:13.994930 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:54:13.994938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:54:13.994948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:54:13.994966 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:54:13.994980 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:54:13.994992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:54:13.995006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:13.995014 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:54:13.995023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:54:13.995031 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:54:13.995063 systemd-journald[193]: Collecting audit messages is disabled. Nov 12 20:54:13.995086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:54:13.995095 systemd-journald[193]: Journal started Nov 12 20:54:13.995115 systemd-journald[193]: Runtime Journal (/run/log/journal/da01bcd7afdc488badb1eda9cd58f9ab) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:54:13.987954 systemd-modules-load[194]: Inserted module 'overlay' Nov 12 20:54:13.997395 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:54:13.998706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:13.999199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:54:14.017550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:14.020241 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:54:14.020590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:54:14.044555 kernel: Bridge firewalling registered Nov 12 20:54:14.023094 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:54:14.044021 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 12 20:54:14.047654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:14.056541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:14.057787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:54:14.059737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:14.097356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:54:14.103495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 12 20:54:14.118057 dracut-cmdline[225]: dracut-dracut-053 Nov 12 20:54:14.171620 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:54:14.181127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:14.249242 kernel: SCSI subsystem initialized Nov 12 20:54:14.258378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:54:14.261289 kernel: Loading iSCSI transport class v2.0-870. Nov 12 20:54:14.289894 systemd-resolved[303]: Positive Trust Anchors: Nov 12 20:54:14.347120 kernel: iscsi: registered transport (tcp) Nov 12 20:54:14.289917 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:54:14.289949 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:54:14.293087 systemd-resolved[303]: Defaulting to hostname 'linux'. Nov 12 20:54:14.294296 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:54:14.347198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:54:14.459759 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:54:14.459783 kernel: QLogic iSCSI HBA Driver Nov 12 20:54:14.509750 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:54:14.542361 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:54:14.569254 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 20:54:14.569318 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:54:14.571247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:54:14.625244 kernel: raid6: avx2x4 gen() 30246 MB/s Nov 12 20:54:14.659247 kernel: raid6: avx2x2 gen() 30660 MB/s Nov 12 20:54:14.676440 kernel: raid6: avx2x1 gen() 25825 MB/s Nov 12 20:54:14.676462 kernel: raid6: using algorithm avx2x2 gen() 30660 MB/s Nov 12 20:54:14.727836 kernel: raid6: .... xor() 19700 MB/s, rmw enabled Nov 12 20:54:14.728007 kernel: raid6: using avx2x2 recovery algorithm Nov 12 20:54:14.810249 kernel: xor: automatically using best checksumming function avx Nov 12 20:54:15.004273 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:54:15.018829 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:54:15.026351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:54:15.039919 systemd-udevd[414]: Using default interface naming scheme 'v255'. Nov 12 20:54:15.044629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 12 20:54:15.055389 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 20:54:15.069613 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Nov 12 20:54:15.107316 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:54:15.118409 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:54:15.185862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:54:15.194381 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:54:15.213368 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:54:15.229911 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:54:15.231695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:54:15.232516 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:54:15.242484 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:54:15.285434 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 12 20:54:15.339339 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:54:15.339373 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 20:54:15.339696 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 20:54:15.339721 kernel: GPT:9289727 != 19775487 Nov 12 20:54:15.339742 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 20:54:15.339759 kernel: GPT:9289727 != 19775487 Nov 12 20:54:15.339779 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 20:54:15.339807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:54:15.285259 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:54:15.311538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:54:15.311654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:54:15.342186 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:15.344154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:15.344330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:15.344793 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:15.356487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:15.362567 kernel: libata version 3.00 loaded. Nov 12 20:54:15.366625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:15.376677 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 12 20:54:15.376702 kernel: AES CTR mode by8 optimization enabled Nov 12 20:54:15.376713 kernel: ahci 0000:00:1f.2: version 3.0 Nov 12 20:54:15.429434 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 12 20:54:15.429454 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 12 20:54:15.429630 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 12 20:54:15.429775 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Nov 12 20:54:15.429795 kernel: scsi host0: ahci Nov 12 20:54:15.429971 kernel: scsi host1: ahci Nov 12 20:54:15.430176 kernel: scsi host2: ahci Nov 12 20:54:15.430374 kernel: scsi host3: ahci Nov 12 20:54:15.430532 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Nov 12 20:54:15.430543 kernel: scsi host4: ahci Nov 12 20:54:15.430701 kernel: scsi host5: ahci Nov 12 20:54:15.430868 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 12 20:54:15.430880 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 12 20:54:15.430890 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 12 20:54:15.430900 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 12 20:54:15.430911 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 12 20:54:15.430921 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 12 20:54:15.366759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:15.383422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:15.403195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 20:54:15.427573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 20:54:15.439673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:15.447231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:54:15.452677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 20:54:15.453106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 20:54:15.472900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 20:54:15.474699 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:15.486452 disk-uuid[570]: Primary Header is updated. Nov 12 20:54:15.486452 disk-uuid[570]: Secondary Entries is updated. Nov 12 20:54:15.486452 disk-uuid[570]: Secondary Header is updated. Nov 12 20:54:15.491242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:54:15.495247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:54:15.496287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:54:15.740794 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 12 20:54:15.740892 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 12 20:54:15.740924 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 20:54:15.742236 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 20:54:15.743237 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 20:54:15.743251 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 12 20:54:15.744534 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 12 20:54:15.744556 kernel: ata3.00: applying bridge limits Nov 12 20:54:15.745544 kernel: ata3.00: configured for UDMA/100 Nov 12 20:54:15.746252 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 12 20:54:15.784253 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 12 20:54:15.796908 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:54:15.796929 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 20:54:16.497244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:54:16.497474 disk-uuid[574]: The operation has completed successfully. Nov 12 20:54:16.525193 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:54:16.525353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:54:16.555366 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:54:16.559412 sh[594]: Success Nov 12 20:54:16.572267 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 12 20:54:16.604871 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:54:16.615754 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:54:16.618865 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 20:54:16.631241 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:54:16.631278 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:16.631289 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:54:16.631300 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:54:16.632576 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:54:16.636273 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:54:16.637832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 20:54:16.651375 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:54:16.653246 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:54:16.663868 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:16.663894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:16.663905 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:54:16.667248 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:54:16.677045 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:54:16.680244 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:16.688927 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 12 20:54:16.699385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:54:16.765258 ignition[687]: Ignition 2.19.0 Nov 12 20:54:16.765919 ignition[687]: Stage: fetch-offline Nov 12 20:54:16.765970 ignition[687]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:16.765984 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:16.766117 ignition[687]: parsed url from cmdline: "" Nov 12 20:54:16.766122 ignition[687]: no config URL provided Nov 12 20:54:16.766130 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:54:16.766144 ignition[687]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:54:16.766180 ignition[687]: op(1): [started] loading QEMU firmware config module Nov 12 20:54:16.766187 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 20:54:16.774099 ignition[687]: op(1): [finished] loading QEMU firmware config module Nov 12 20:54:16.789389 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:54:16.796383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:54:16.819041 ignition[687]: parsing config with SHA512: 561f8dad1872aa9507ac6b2edecbbf550e1ab0f6d1aa570282d5acc9252fe3a446e874f7e7e83997ce50053fcd3922e1908de03ea59ec99c969fe9ef7fa6b3f8 Nov 12 20:54:16.819620 systemd-networkd[781]: lo: Link UP Nov 12 20:54:16.819630 systemd-networkd[781]: lo: Gained carrier Nov 12 20:54:16.821723 systemd-networkd[781]: Enumeration completed Nov 12 20:54:16.822228 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:54:16.822273 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:16.822278 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:54:16.823313 systemd[1]: Reached target network.target - Network. Nov 12 20:54:16.823564 systemd-networkd[781]: eth0: Link UP Nov 12 20:54:16.823568 systemd-networkd[781]: eth0: Gained carrier Nov 12 20:54:16.823576 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:16.837735 unknown[687]: fetched base config from "system" Nov 12 20:54:16.837748 unknown[687]: fetched user config from "qemu" Nov 12 20:54:16.838123 ignition[687]: fetch-offline: fetch-offline passed Nov 12 20:54:16.838185 ignition[687]: Ignition finished successfully Nov 12 20:54:16.840630 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:54:16.842262 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:54:16.842715 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 20:54:16.847496 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 20:54:16.862758 ignition[786]: Ignition 2.19.0 Nov 12 20:54:16.862770 ignition[786]: Stage: kargs Nov 12 20:54:16.862952 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:16.862964 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:16.863758 ignition[786]: kargs: kargs passed Nov 12 20:54:16.863805 ignition[786]: Ignition finished successfully Nov 12 20:54:16.867179 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:54:16.879396 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:54:16.894203 ignition[794]: Ignition 2.19.0 Nov 12 20:54:16.894234 ignition[794]: Stage: disks Nov 12 20:54:16.894428 ignition[794]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:16.894440 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:16.895421 ignition[794]: disks: disks passed Nov 12 20:54:16.897875 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:54:16.895484 ignition[794]: Ignition finished successfully Nov 12 20:54:16.898819 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:54:16.899247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:54:16.899585 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:54:16.899956 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:54:16.900465 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:54:16.908391 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:54:16.920833 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 20:54:16.927978 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:54:16.937405 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 20:54:17.120272 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:54:17.121058 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:54:17.123364 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:54:17.135332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:54:17.138239 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:54:17.140323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 20:54:17.140370 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:54:17.140392 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:54:17.150357 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Nov 12 20:54:17.150383 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:17.150401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:17.150411 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:54:17.150684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:54:17.153135 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:54:17.175399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 12 20:54:17.179522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:54:17.209779 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:54:17.215486 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:54:17.220831 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:54:17.225109 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:54:17.324595 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:54:17.341371 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:54:17.345927 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:54:17.351240 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:17.375025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:54:17.379746 ignition[927]: INFO : Ignition 2.19.0 Nov 12 20:54:17.379746 ignition[927]: INFO : Stage: mount Nov 12 20:54:17.381853 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:17.381853 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:17.385485 ignition[927]: INFO : mount: mount passed Nov 12 20:54:17.386412 ignition[927]: INFO : Ignition finished successfully Nov 12 20:54:17.389982 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:54:17.406414 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:54:17.629751 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:54:17.641415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:54:17.655785 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Nov 12 20:54:17.655833 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:17.655863 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:17.656736 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:54:17.660228 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:54:17.662519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:54:17.691972 ignition[957]: INFO : Ignition 2.19.0 Nov 12 20:54:17.691972 ignition[957]: INFO : Stage: files Nov 12 20:54:17.706472 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:17.706472 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:17.706472 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:54:17.710959 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:54:17.710959 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:54:17.715289 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:54:17.717032 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:54:17.718748 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:54:17.717659 unknown[957]: wrote ssh authorized keys file for user: core Nov 12 20:54:17.749023 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:54:17.749023 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:54:17.774653 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:54:17.884382 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:54:17.884382 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:54:17.888454 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Nov 12 20:54:18.280861 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:54:18.621465 systemd-networkd[781]: eth0: Gained IPv6LL Nov 12 20:54:18.925631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:54:18.925631 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:54:18.929684 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:54:18.933188 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:54:18.933188 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:54:18.933188 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 12 20:54:18.938558 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:54:18.940960 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:54:18.940960 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 12 20:54:18.944988 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:54:18.985975 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:54:18.991196 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:54:18.993105 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:54:18.993105 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:54:18.993105 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:54:18.993105 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:54:18.993105 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:54:18.993105 ignition[957]: INFO : files: files passed Nov 12 20:54:18.993105 ignition[957]: INFO : Ignition finished successfully Nov 12 20:54:18.994623 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:54:19.007391 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:54:19.009486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:54:19.011929 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:54:19.012090 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:54:19.020693 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:54:19.024187 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:54:19.024187 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:54:19.028568 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:54:19.029988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:54:19.036339 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:54:19.054511 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:54:19.082653 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:54:19.082804 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:54:19.083725 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:54:19.087078 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:54:19.090416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:54:19.099433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:54:19.115051 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:54:19.117078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:54:19.134274 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:54:19.136673 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:54:19.137319 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:54:19.137778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:54:19.137963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:54:19.142845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:54:19.143264 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:54:19.143814 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:54:19.144152 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:54:19.144702 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:54:19.145107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:54:19.145867 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:54:19.146264 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:54:19.146789 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:54:19.147117 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:54:19.147607 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:54:19.147793 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:54:19.167082 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 12 20:54:19.167675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:54:19.168046 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:54:19.171553 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:54:19.172080 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:54:19.172275 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:54:19.178196 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:54:19.178408 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:54:19.178949 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:54:19.181924 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:54:19.186288 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:54:19.189121 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:54:19.189712 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:54:19.190054 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:54:19.190196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:54:19.193122 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:54:19.193288 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:54:19.194904 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:54:19.195081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:54:19.196795 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:54:19.196955 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:54:19.211471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:54:19.212953 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:54:19.214132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:54:19.214418 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:54:19.214859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:54:19.214997 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:54:19.222319 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:54:19.222473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:54:19.236959 ignition[1013]: INFO : Ignition 2.19.0 Nov 12 20:54:19.236959 ignition[1013]: INFO : Stage: umount Nov 12 20:54:19.239011 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:19.239011 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:54:19.239011 ignition[1013]: INFO : umount: umount passed Nov 12 20:54:19.239011 ignition[1013]: INFO : Ignition finished successfully Nov 12 20:54:19.241413 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:54:19.241653 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:54:19.243788 systemd[1]: Stopped target network.target - Network. Nov 12 20:54:19.245789 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:54:19.245854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 12 20:54:19.248308 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:54:19.248365 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:54:19.250901 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:54:19.250952 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:54:19.253127 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:54:19.253191 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:54:19.255611 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:54:19.257895 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:54:19.261577 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:54:19.262269 systemd-networkd[781]: eth0: DHCPv6 lease lost Nov 12 20:54:19.265489 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:54:19.265673 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:54:19.268088 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:54:19.268142 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:54:19.284450 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:54:19.284718 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:54:19.284803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:54:19.285324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:54:19.286140 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:54:19.286319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:54:19.294847 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:54:19.294922 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:19.295839 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:54:19.295889 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:19.296176 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:54:19.296234 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:19.307797 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:54:19.308007 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:54:19.316105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:54:19.316197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:54:19.316885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:54:19.316924 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:54:19.317173 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:54:19.317246 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:54:19.322784 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:54:19.322846 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:54:19.325552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:54:19.325607 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:54:19.329381 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:54:19.350800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:54:19.350866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:54:19.352141 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:54:19.352190 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:54:19.353488 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:54:19.353545 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:54:19.353964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:19.354012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:19.356289 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:54:19.356409 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:54:19.358840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:54:19.358958 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:54:19.451179 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:54:19.452516 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:54:19.455609 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:54:19.457847 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:54:19.458943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:54:19.477665 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:54:19.486881 systemd[1]: Switching root. Nov 12 20:54:19.520038 systemd-journald[193]: Journal stopped Nov 12 20:54:20.900755 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 12 20:54:20.900843 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:54:20.900862 kernel: SELinux: policy capability open_perms=1 Nov 12 20:54:20.900876 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:54:20.900910 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:54:20.900922 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:54:20.900939 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:54:20.900950 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:54:20.900962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:54:20.900973 kernel: audit: type=1403 audit(1731444860.038:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:54:20.900986 systemd[1]: Successfully loaded SELinux policy in 43.098ms. Nov 12 20:54:20.901006 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.508ms. Nov 12 20:54:20.901020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:54:20.901043 systemd[1]: Detected virtualization kvm. Nov 12 20:54:20.901058 systemd[1]: Detected architecture x86-64. Nov 12 20:54:20.901070 systemd[1]: Detected first boot. 
Nov 12 20:54:20.901088 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:54:20.901100 zram_generator::config[1056]: No configuration found. Nov 12 20:54:20.901113 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:54:20.901125 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:54:20.901137 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:54:20.901155 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:54:20.901168 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:54:20.901180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:54:20.901197 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:54:20.901209 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:54:20.901235 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:54:20.901247 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:54:20.901259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:54:20.901272 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:54:20.901290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:54:20.901303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:54:20.901315 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:54:20.901327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:54:20.901340 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:54:20.901353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:54:20.901365 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:54:20.901378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:54:20.901390 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:54:20.901407 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:54:20.901419 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:54:20.901431 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:54:20.901444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:54:20.901457 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:54:20.901469 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:54:20.901481 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:54:20.901493 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:54:20.901510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:54:20.901522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:54:20.901535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:54:20.901547 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 12 20:54:20.901559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:54:20.901572 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:54:20.901584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:54:20.901596 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:54:20.901614 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:20.901627 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:54:20.901640 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:54:20.901652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:54:20.901665 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:54:20.901676 systemd[1]: Reached target machines.target - Containers. Nov 12 20:54:20.901688 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:54:20.901701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:20.901721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:54:20.901739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:54:20.901751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:54:20.901763 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:54:20.901775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:54:20.901788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:54:20.901799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:54:20.901812 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:54:20.901824 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:54:20.901841 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:54:20.901853 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:54:20.901865 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:54:20.901877 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:54:20.901889 kernel: loop: module loaded Nov 12 20:54:20.901902 kernel: fuse: init (API version 7.39) Nov 12 20:54:20.901914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:54:20.901926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:54:20.901938 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:54:20.901954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:54:20.901966 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:54:20.901978 systemd[1]: Stopped verity-setup.service. Nov 12 20:54:20.901990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 12 20:54:20.902002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:54:20.902014 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:54:20.902044 systemd-journald[1126]: Collecting audit messages is disabled. Nov 12 20:54:20.902066 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:54:20.902082 systemd-journald[1126]: Journal started Nov 12 20:54:20.902104 systemd-journald[1126]: Runtime Journal (/run/log/journal/da01bcd7afdc488badb1eda9cd58f9ab) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:54:20.902140 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:54:20.622836 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:54:20.654660 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:54:20.655191 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:54:20.905263 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:54:20.907026 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:54:20.908336 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:54:20.909653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:54:20.911478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:54:20.913407 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:54:20.913641 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:54:20.915457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:54:20.915696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:54:20.917274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:20.917462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:20.919071 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:54:20.919278 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:54:20.920774 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:20.920977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:20.922628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:20.924207 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:54:20.926149 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:54:20.929743 kernel: ACPI: bus type drm_connector registered Nov 12 20:54:20.930826 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:54:20.931049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:54:20.945515 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:54:20.954366 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:54:20.957394 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:54:20.958663 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:54:20.958701 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 12 20:54:20.960885 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:54:20.970569 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:54:20.974252 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:54:20.975634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:20.977572 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:54:20.982745 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:54:20.984876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:54:20.990333 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:54:20.990679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:54:20.992539 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:20.999132 systemd-journald[1126]: Time spent on flushing to /var/log/journal/da01bcd7afdc488badb1eda9cd58f9ab is 24.889ms for 993 entries. Nov 12 20:54:20.999132 systemd-journald[1126]: System Journal (/var/log/journal/da01bcd7afdc488badb1eda9cd58f9ab) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:54:21.041055 systemd-journald[1126]: Received client request to flush runtime journal. Nov 12 20:54:21.041103 kernel: loop0: detected capacity change from 0 to 140768 Nov 12 20:54:20.995250 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:54:21.001390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:54:21.005348 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:54:21.008420 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:54:21.012192 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:54:21.022795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:54:21.024852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:54:21.034657 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:54:21.040146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:54:21.048684 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:54:21.051058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:21.065677 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:54:21.074599 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Nov 12 20:54:21.074620 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Nov 12 20:54:21.076322 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:54:21.082861 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:54:21.086945 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Nov 12 20:54:21.088047 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:54:21.092968 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:54:21.100532 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:54:21.102281 kernel: loop1: detected capacity change from 0 to 142488 Nov 12 20:54:21.135624 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:54:21.139300 kernel: loop2: detected capacity change from 0 to 205544 Nov 12 20:54:21.148598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:54:21.241468 kernel: loop3: detected capacity change from 0 to 140768 Nov 12 20:54:21.250968 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Nov 12 20:54:21.250995 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Nov 12 20:54:21.258547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:54:21.265246 kernel: loop4: detected capacity change from 0 to 142488 Nov 12 20:54:21.274276 kernel: loop5: detected capacity change from 0 to 205544 Nov 12 20:54:21.282641 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:54:21.283376 (sd-merge)[1197]: Merged extensions into '/usr'. Nov 12 20:54:21.287995 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:54:21.288017 systemd[1]: Reloading... Nov 12 20:54:21.399719 zram_generator::config[1224]: No configuration found. Nov 12 20:54:21.601428 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:54:21.601733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:21.661090 systemd[1]: Reloading finished in 372 ms. Nov 12 20:54:21.705422 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:54:21.707087 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:54:21.721503 systemd[1]: Starting ensure-sysext.service... Nov 12 20:54:21.745265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:54:21.773037 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:54:21.773060 systemd[1]: Reloading... Nov 12 20:54:21.813499 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:54:21.814016 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:54:21.815375 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:54:21.815824 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Nov 12 20:54:21.815938 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Nov 12 20:54:21.824570 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 12 20:54:21.824594 systemd-tmpfiles[1262]: Skipping /boot Nov 12 20:54:21.848253 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:54:21.848274 systemd-tmpfiles[1262]: Skipping /boot Nov 12 20:54:21.850272 zram_generator::config[1288]: No configuration found. Nov 12 20:54:21.987453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:22.046918 systemd[1]: Reloading finished in 273 ms. Nov 12 20:54:22.072313 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:54:22.089970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:22.101259 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:22.106788 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:54:22.110508 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:54:22.116075 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:54:22.119964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:54:22.123348 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:54:22.129534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:22.129798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:22.134557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:54:22.138821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:54:22.143958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:54:22.145396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:22.145560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:22.155573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:54:22.158344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:22.158632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:22.161797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:54:22.162076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:54:22.164863 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:22.165144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:22.171064 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:54:22.171973 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Nov 12 20:54:22.183873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 12 20:54:22.184140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:22.194642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:54:22.197775 augenrules[1357]: No rules Nov 12 20:54:22.199414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:54:22.205590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:54:22.207104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:22.207282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:22.208607 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:22.210804 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:54:22.212746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:54:22.212952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:54:22.215075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:22.215318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:22.217847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:54:22.219958 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:22.220159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:22.235760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:22.236008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:22.248850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:54:22.253581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:54:22.257499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:54:22.263499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:54:22.264954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:22.268747 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:54:22.272860 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:54:22.274109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:22.276116 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:54:22.278338 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:54:22.281948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:54:22.282846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:54:22.285506 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:54:22.285815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 12 20:54:22.289151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:22.293757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:22.299313 systemd[1]: Finished ensure-sysext.service. Nov 12 20:54:22.301278 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:22.301515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:22.307235 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374) Nov 12 20:54:22.310240 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1374) Nov 12 20:54:22.318283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1391) Nov 12 20:54:22.338082 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:54:22.346713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:54:22.346828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:54:22.354710 systemd-resolved[1332]: Positive Trust Anchors: Nov 12 20:54:22.354737 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:54:22.354779 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:54:22.363475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:54:22.364910 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:54:22.374487 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:54:22.375846 systemd-resolved[1332]: Defaulting to hostname 'linux'. Nov 12 20:54:22.380891 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:54:22.383396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:54:22.432646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:54:22.448591 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:54:22.477650 systemd-networkd[1397]: lo: Link UP Nov 12 20:54:22.477681 systemd-networkd[1397]: lo: Gained carrier Nov 12 20:54:22.479949 systemd-networkd[1397]: Enumeration completed Nov 12 20:54:22.480139 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:54:22.481965 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 12 20:54:22.481977 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:54:22.482371 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:54:22.484292 systemd-networkd[1397]: eth0: Link UP Nov 12 20:54:22.484307 systemd-networkd[1397]: eth0: Gained carrier Nov 12 20:54:22.484326 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:22.485907 systemd[1]: Reached target network.target - Network. Nov 12 20:54:22.496565 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:54:22.497244 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:54:22.502261 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 20:54:22.506277 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:54:23.048600 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:54:23.048688 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 20:54:23.048739 systemd-timesyncd[1411]: Initial clock synchronization to Tue 2024-11-12 20:54:23.048551 UTC. Nov 12 20:54:23.048804 systemd-resolved[1332]: Clock change detected. Flushing caches. Nov 12 20:54:23.050704 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:54:23.064287 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 20:54:23.071412 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 20:54:23.071749 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 20:54:23.073116 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 20:54:23.093906 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 12 20:54:23.164355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:23.170186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:23.172438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:23.186051 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:54:23.183488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:23.199417 kernel: kvm_amd: TSC scaling supported Nov 12 20:54:23.199478 kernel: kvm_amd: Nested Virtualization enabled Nov 12 20:54:23.199542 kernel: kvm_amd: Nested Paging enabled Nov 12 20:54:23.199564 kernel: kvm_amd: LBR virtualization supported Nov 12 20:54:23.200002 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 20:54:23.201094 kernel: kvm_amd: Virtual GIF supported Nov 12 20:54:23.224911 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:54:23.255533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:54:23.261069 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:54:23.265768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:23.274050 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:54:23.308486 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:54:23.310195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 12 20:54:23.311459 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:54:23.312850 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:54:23.314298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:54:23.315833 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:54:23.317105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:54:23.318384 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:54:23.319670 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:54:23.319701 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:54:23.320628 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:54:23.322212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:54:23.325242 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:54:23.338272 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:54:23.341192 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:54:23.343024 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:54:23.344363 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:54:23.345483 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:54:23.346528 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:54:23.346563 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:54:23.347773 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:54:23.350191 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:54:23.355218 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:54:23.359777 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:54:23.361154 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:54:23.365030 jq[1441]: false Nov 12 20:54:23.365351 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:54:23.365181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:54:23.371017 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:54:23.374071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:54:23.377030 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 12 20:54:23.385536 extend-filesystems[1442]: Found loop3 Nov 12 20:54:23.385536 extend-filesystems[1442]: Found loop4 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found loop5 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found sr0 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda1 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda2 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda3 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found usr Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda4 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda6 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda7 Nov 12 20:54:23.390208 extend-filesystems[1442]: Found vda9 Nov 12 20:54:23.390208 extend-filesystems[1442]: Checking size of /dev/vda9 Nov 12 20:54:23.389765 dbus-daemon[1440]: [system] SELinux support is enabled Nov 12 20:54:23.389088 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:54:23.400188 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:54:23.401882 extend-filesystems[1442]: Resized partition /dev/vda9 Nov 12 20:54:23.400772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:54:23.402972 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:54:23.406038 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:54:23.405322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:54:23.408483 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:54:23.411152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 20:54:23.417541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Nov 12 20:54:23.419323 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:54:23.421786 jq[1462]: true Nov 12 20:54:23.422714 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:54:23.424024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:54:23.424470 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:54:23.424696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:54:23.427853 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:54:23.429160 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 12 20:54:23.439893 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 20:54:23.442763 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:54:23.468063 jq[1467]: true Nov 12 20:54:23.468259 update_engine[1460]: I20241112 20:54:23.459846 1460 main.cc:92] Flatcar Update Engine starting Nov 12 20:54:23.468259 update_engine[1460]: I20241112 20:54:23.461266 1460 update_check_scheduler.cc:74] Next update check in 5m34s Nov 12 20:54:23.472106 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:54:23.472106 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:54:23.472106 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 20:54:23.483446 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Nov 12 20:54:23.473688 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:54:23.484719 tar[1466]: linux-amd64/helm Nov 12 20:54:23.474035 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:54:23.488575 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:54:23.490266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:54:23.490305 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:54:23.491663 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:54:23.491686 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:54:23.498292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:54:23.523521 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:54:23.529886 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:54:23.532178 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:54:23.538734 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:54:23.538779 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:54:23.540000 systemd-logind[1450]: New seat seat0. Nov 12 20:54:23.542488 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:54:23.573578 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:54:23.586129 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:54:23.612819 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:54:23.622187 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:54:23.633126 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:54:23.633420 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:54:23.640112 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:54:23.655740 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:54:23.665352 systemd[1]: Started getty@tty1.service - Getty on tty1. 
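The extend-filesystems sequence above is an online grow: resize2fs 1.47.1 extends the mounted ext4 root on /dev/vda9 from 553472 to 1864699 4k blocks without an unmount. The equivalent manual steps, assuming the underlying partition has already been enlarged (device name taken from the log; run as root):

    # Grow a mounted ext4 filesystem to fill its partition
    resize2fs /dev/vda9   # ext4 can grow online; shrinking would require unmounting
    df -B4k /             # confirm the new size in 4k blocks (1864699 in the log)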
Nov 12 20:54:23.668784 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:54:23.670450 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:54:23.675330 containerd[1468]: time="2024-11-12T20:54:23.675220506Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:54:23.702512 containerd[1468]: time="2024-11-12T20:54:23.702441864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.704797 containerd[1468]: time="2024-11-12T20:54:23.704733375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:23.704797 containerd[1468]: time="2024-11-12T20:54:23.704789941Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:54:23.704946 containerd[1468]: time="2024-11-12T20:54:23.704814367Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:54:23.705106 containerd[1468]: time="2024-11-12T20:54:23.705081919Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:54:23.705106 containerd[1468]: time="2024-11-12T20:54:23.705103500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705202 containerd[1468]: time="2024-11-12T20:54:23.705178110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705202 containerd[1468]: time="2024-11-12T20:54:23.705195763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705468 containerd[1468]: time="2024-11-12T20:54:23.705441594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705468 containerd[1468]: time="2024-11-12T20:54:23.705462734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705534 containerd[1468]: time="2024-11-12T20:54:23.705476430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705534 containerd[1468]: time="2024-11-12T20:54:23.705486849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705599 containerd[1468]: time="2024-11-12T20:54:23.705586997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:23.705947 containerd[1468]: time="2024-11-12T20:54:23.705913460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:54:23.706091 containerd[1468]: time="2024-11-12T20:54:23.706056869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:23.706091 containerd[1468]: time="2024-11-12T20:54:23.706076185Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:54:23.706223 containerd[1468]: time="2024-11-12T20:54:23.706194838Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:54:23.706280 containerd[1468]: time="2024-11-12T20:54:23.706261082Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:54:23.713789 containerd[1468]: time="2024-11-12T20:54:23.713748184Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:54:23.713847 containerd[1468]: time="2024-11-12T20:54:23.713794621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:54:23.713847 containerd[1468]: time="2024-11-12T20:54:23.713812084Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:54:23.713847 containerd[1468]: time="2024-11-12T20:54:23.713829386Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:54:23.713960 containerd[1468]: time="2024-11-12T20:54:23.713847260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:54:23.714096 containerd[1468]: time="2024-11-12T20:54:23.714031295Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714318304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714525643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714543016Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714556381Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714570397Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714583031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714597217Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714612766Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714628676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714641260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714653453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714666417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714698377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.714795 containerd[1468]: time="2024-11-12T20:54:23.714718946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714733313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714745826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714761385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714777055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714790159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714803875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714819424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714835374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714905336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714921776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714934741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714955410Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714976820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714989123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715183 containerd[1468]: time="2024-11-12T20:54:23.714999883Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715053744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715070936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715082368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715095823Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715111272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715125659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715135688Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:54:23.715462 containerd[1468]: time="2024-11-12T20:54:23.715146047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:54:23.715612 containerd[1468]: time="2024-11-12T20:54:23.715461029Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:54:23.715612 containerd[1468]: time="2024-11-12T20:54:23.715545477Z" level=info msg="Connect containerd service" Nov 12 20:54:23.715612 containerd[1468]: time="2024-11-12T20:54:23.715582286Z" level=info msg="using legacy CRI server" Nov 12 20:54:23.715612 containerd[1468]: time="2024-11-12T20:54:23.715589700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:54:23.715822 containerd[1468]: time="2024-11-12T20:54:23.715684578Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:54:23.716478 containerd[1468]: time="2024-11-12T20:54:23.716424617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:54:23.716653 
containerd[1468]: time="2024-11-12T20:54:23.716592652Z" level=info msg="Start subscribing containerd event" Nov 12 20:54:23.716685 containerd[1468]: time="2024-11-12T20:54:23.716674887Z" level=info msg="Start recovering state" Nov 12 20:54:23.716787 containerd[1468]: time="2024-11-12T20:54:23.716763423Z" level=info msg="Start event monitor" Nov 12 20:54:23.716809 containerd[1468]: time="2024-11-12T20:54:23.716789802Z" level=info msg="Start snapshots syncer" Nov 12 20:54:23.716809 containerd[1468]: time="2024-11-12T20:54:23.716801144Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:54:23.716845 containerd[1468]: time="2024-11-12T20:54:23.716813567Z" level=info msg="Start streaming server" Nov 12 20:54:23.717149 containerd[1468]: time="2024-11-12T20:54:23.717125562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:54:23.717216 containerd[1468]: time="2024-11-12T20:54:23.717198149Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:54:23.717292 containerd[1468]: time="2024-11-12T20:54:23.717275163Z" level=info msg="containerd successfully booted in 0.043268s" Nov 12 20:54:23.717442 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:54:24.047441 tar[1466]: linux-amd64/LICENSE Nov 12 20:54:24.047559 tar[1466]: linux-amd64/README.md Nov 12 20:54:24.071796 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:54:24.411165 systemd-networkd[1397]: eth0: Gained IPv6LL Nov 12 20:54:24.414667 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:54:24.416489 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:54:24.426279 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:54:24.429037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:24.431301 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:54:24.451003 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:54:24.451307 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:54:24.453094 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:54:24.460008 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:54:25.678470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:25.682269 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:54:25.684538 systemd[1]: Startup finished in 903ms (kernel) + 6.310s (initrd) + 5.145s (userspace) = 12.358s. Nov 12 20:54:25.690065 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:26.568567 kubelet[1552]: E1112 20:54:26.568476 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:26.573421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:26.573681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:26.574213 systemd[1]: kubelet.service: Consumed 1.982s CPU time. 
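Two errors in this stretch are the normal pre-bootstrap state of a Kubernetes node rather than faults: containerd finds no CNI network config in /etc/cni/net.d, and the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet. Both files appear later in a cluster's life; kubeadm writes the kubelet config during init/join, and a CNI plugin normally installs the network config. For illustration only, a minimal hypothetical conflist that would satisfy containerd's check (real clusters get this from their CNI plugin, e.g. flannel or calico):

    # /etc/cni/net.d/10-example.conflist (hypothetical; the name and the
    # 10.244.0.0/24 subnet are made up for the example)
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        }
      ]
    }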
Nov 12 20:54:27.874966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:54:27.884183 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:44896.service - OpenSSH per-connection server daemon (10.0.0.1:44896). Nov 12 20:54:27.925127 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 44896 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:27.927435 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:27.939664 systemd-logind[1450]: New session 1 of user core. Nov 12 20:54:27.941439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:54:27.953349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:54:27.968410 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:54:27.970708 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:54:27.982265 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:54:28.124640 systemd[1570]: Queued start job for default target default.target. Nov 12 20:54:28.142842 systemd[1570]: Created slice app.slice - User Application Slice. Nov 12 20:54:28.142905 systemd[1570]: Reached target paths.target - Paths. Nov 12 20:54:28.142925 systemd[1570]: Reached target timers.target - Timers. Nov 12 20:54:28.145034 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:54:28.158063 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:54:28.158240 systemd[1570]: Reached target sockets.target - Sockets. Nov 12 20:54:28.158265 systemd[1570]: Reached target basic.target - Basic System. Nov 12 20:54:28.158308 systemd[1570]: Reached target default.target - Main User Target. Nov 12 20:54:28.158346 systemd[1570]: Startup finished in 166ms. Nov 12 20:54:28.158759 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:54:28.168028 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:54:28.232919 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:44904.service - OpenSSH per-connection server daemon (10.0.0.1:44904). Nov 12 20:54:28.277613 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 44904 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:28.280013 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:28.285528 systemd-logind[1450]: New session 2 of user core. Nov 12 20:54:28.299067 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:54:28.355358 sshd[1581]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:28.363016 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:44904.service: Deactivated successfully. Nov 12 20:54:28.365240 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:54:28.367152 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:54:28.377165 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:44908.service - OpenSSH per-connection server daemon (10.0.0.1:44908). Nov 12 20:54:28.378464 systemd-logind[1450]: Removed session 2. 
Nov 12 20:54:28.415835 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 44908 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:28.418233 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:28.423469 systemd-logind[1450]: New session 3 of user core. Nov 12 20:54:28.435061 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:54:28.487060 sshd[1588]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:28.505695 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:44908.service: Deactivated successfully. Nov 12 20:54:28.508070 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:54:28.509743 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:54:28.523315 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:44918.service - OpenSSH per-connection server daemon (10.0.0.1:44918). Nov 12 20:54:28.524395 systemd-logind[1450]: Removed session 3. Nov 12 20:54:28.555852 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 44918 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:28.557795 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:28.562308 systemd-logind[1450]: New session 4 of user core. Nov 12 20:54:28.577040 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:54:28.633194 sshd[1595]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:28.649038 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:44918.service: Deactivated successfully. Nov 12 20:54:28.651065 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:54:28.652964 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:54:28.662209 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:44932.service - OpenSSH per-connection server daemon (10.0.0.1:44932). Nov 12 20:54:28.663378 systemd-logind[1450]: Removed session 4. Nov 12 20:54:28.698456 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 44932 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:28.700210 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:28.705102 systemd-logind[1450]: New session 5 of user core. Nov 12 20:54:28.715022 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:54:28.783487 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:54:28.783886 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:28.805905 sudo[1605]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:28.808480 sshd[1602]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:28.826967 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:44932.service: Deactivated successfully. Nov 12 20:54:28.829762 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:54:28.832171 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:54:28.847332 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). Nov 12 20:54:28.848483 systemd-logind[1450]: Removed session 5. 
Nov 12 20:54:28.882716 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:28.884497 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:28.888666 systemd-logind[1450]: New session 6 of user core. Nov 12 20:54:28.899006 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:54:28.955719 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:54:28.956201 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:28.960298 sudo[1614]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:28.967451 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:54:28.967892 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:28.995361 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:28.997579 auditctl[1617]: No rules Nov 12 20:54:28.999287 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:54:28.999615 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:29.001970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:29.036542 augenrules[1635]: No rules Nov 12 20:54:29.038506 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:29.039969 sudo[1613]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:29.042029 sshd[1610]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:29.052454 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:44934.service: Deactivated successfully. Nov 12 20:54:29.054475 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:54:29.056283 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:54:29.068157 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:44950.service - OpenSSH per-connection server daemon (10.0.0.1:44950). Nov 12 20:54:29.069075 systemd-logind[1450]: Removed session 6. Nov 12 20:54:29.100685 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 44950 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:54:29.102589 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:29.107155 systemd-logind[1450]: New session 7 of user core. Nov 12 20:54:29.116998 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:54:29.171580 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:54:29.172050 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:29.953135 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:54:29.953381 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:54:30.845838 dockerd[1663]: time="2024-11-12T20:54:30.845728371Z" level=info msg="Starting up" Nov 12 20:54:31.610426 dockerd[1663]: time="2024-11-12T20:54:31.610364960Z" level=info msg="Loading containers: start." 
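The audit sequence above (removing the rule files via sudo, auditctl reporting "No rules", then audit-rules.service restarting with augenrules finding none) is the standard reload cycle: augenrules merges everything under /etc/audit/rules.d/ and hands the result to auditctl. The same cycle done by hand, with a hypothetical example rule:

    # Reload audit rules the way audit-rules.service does (run as root)
    auditctl -D                                  # flush loaded rules
    echo '-w /etc/passwd -p wa -k passwd-watch' \
        > /etc/audit/rules.d/90-example.rules    # hypothetical watch rule
    augenrules --load                            # merge rules.d/* and load the result
    auditctl -l                                  # show the active rule set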
Nov 12 20:54:31.799899 kernel: Initializing XFRM netlink socket Nov 12 20:54:31.899773 systemd-networkd[1397]: docker0: Link UP Nov 12 20:54:32.220527 dockerd[1663]: time="2024-11-12T20:54:32.220364358Z" level=info msg="Loading containers: done." Nov 12 20:54:32.439216 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2877136227-merged.mount: Deactivated successfully. Nov 12 20:54:32.443985 dockerd[1663]: time="2024-11-12T20:54:32.443889977Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:54:32.444176 dockerd[1663]: time="2024-11-12T20:54:32.444137983Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:54:32.444354 dockerd[1663]: time="2024-11-12T20:54:32.444320686Z" level=info msg="Daemon has completed initialization" Nov 12 20:54:32.495537 dockerd[1663]: time="2024-11-12T20:54:32.495285866Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:54:32.495807 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:54:33.169917 containerd[1468]: time="2024-11-12T20:54:33.169847935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 20:54:34.349454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165708210.mount: Deactivated successfully. Nov 12 20:54:35.693172 containerd[1468]: time="2024-11-12T20:54:35.693090981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.694741 containerd[1468]: time="2024-11-12T20:54:35.694703848Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588" Nov 12 20:54:35.695938 containerd[1468]: time="2024-11-12T20:54:35.695883291Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.700648 containerd[1468]: time="2024-11-12T20:54:35.700606616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.701568 containerd[1468]: time="2024-11-12T20:54:35.701514870Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 2.531601082s" Nov 12 20:54:35.701568 containerd[1468]: time="2024-11-12T20:54:35.701555887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 12 20:54:35.703431 containerd[1468]: time="2024-11-12T20:54:35.703404617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 20:54:36.824029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:54:36.875313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
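By this point dockerd has brought up the docker0 bridge (the "docker0: Link UP" line from systemd-networkd) and is serving the API on /run/docker.sock, so the socket-activated unit is now backed by a live daemon. An illustrative smoke test with the standard docker CLI (output naturally differs per host):

    # Query the daemon that just logged "API listen on /run/docker.sock"
    docker info --format '{{.ServerVersion}} {{.Driver}}'   # e.g. "26.1.0 overlay2", as in the log
    ip link show docker0                                    # the bridge networkd reported up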
Nov 12 20:54:37.117302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:37.125588 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:37.249900 kubelet[1878]: E1112 20:54:37.249768 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:37.258603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:37.258924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:38.059215 containerd[1468]: time="2024-11-12T20:54:38.059124368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:38.060558 containerd[1468]: time="2024-11-12T20:54:38.060507794Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922" Nov 12 20:54:38.063013 containerd[1468]: time="2024-11-12T20:54:38.062903530Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:38.069956 containerd[1468]: time="2024-11-12T20:54:38.069787109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:38.071310 containerd[1468]: time="2024-11-12T20:54:38.070964750Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 2.36751104s" Nov 12 20:54:38.071310 containerd[1468]: time="2024-11-12T20:54:38.071023931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 12 20:54:38.072165 containerd[1468]: time="2024-11-12T20:54:38.072102966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 20:54:43.541223 containerd[1468]: time="2024-11-12T20:54:43.541135437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.542893 containerd[1468]: time="2024-11-12T20:54:43.542796454Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606" Nov 12 20:54:43.544666 containerd[1468]: time="2024-11-12T20:54:43.544611540Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.548039 containerd[1468]: time="2024-11-12T20:54:43.547959744Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.549458 containerd[1468]: time="2024-11-12T20:54:43.549320268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 5.477173229s" Nov 12 20:54:43.549458 containerd[1468]: time="2024-11-12T20:54:43.549366995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 12 20:54:43.550234 containerd[1468]: time="2024-11-12T20:54:43.550182225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 20:54:45.199305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624292053.mount: Deactivated successfully. Nov 12 20:54:45.692973 containerd[1468]: time="2024-11-12T20:54:45.692783368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:45.693710 containerd[1468]: time="2024-11-12T20:54:45.693629757Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814" Nov 12 20:54:45.696745 containerd[1468]: time="2024-11-12T20:54:45.696679380Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:45.717629 containerd[1468]: time="2024-11-12T20:54:45.717563856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:45.718768 containerd[1468]: time="2024-11-12T20:54:45.718686793Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 2.168448202s" Nov 12 20:54:45.718809 containerd[1468]: time="2024-11-12T20:54:45.718761974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 12 20:54:45.719522 containerd[1468]: time="2024-11-12T20:54:45.719474912Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:54:46.449218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95779101.mount: Deactivated successfully. Nov 12 20:54:47.509232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:54:47.520188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:47.679611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:54:47.684375 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:47.756779 kubelet[1951]: E1112 20:54:47.756673 1951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:47.763202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:47.763516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:48.534564 containerd[1468]: time="2024-11-12T20:54:48.534475635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.535661 containerd[1468]: time="2024-11-12T20:54:48.535612539Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:54:48.537415 containerd[1468]: time="2024-11-12T20:54:48.537350520Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.547122 containerd[1468]: time="2024-11-12T20:54:48.547038092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.548507 containerd[1468]: time="2024-11-12T20:54:48.548455963Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.828925025s" Nov 12 20:54:48.548569 containerd[1468]: time="2024-11-12T20:54:48.548505516Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:54:48.549248 containerd[1468]: time="2024-11-12T20:54:48.549222983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 20:54:50.785411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160336130.mount: Deactivated successfully. 
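The kubelet keeps cycling through "Scheduled restart job" and the same exit because /var/lib/kubelet/config.yaml still does not exist; the unit's Restart= setting retries, and each attempt fails at the same config load. The missing file is a KubeletConfiguration. A minimal hypothetical sketch of one follows (on a kubeadm node this file is generated during init/join, not written by hand):

    # /var/lib/kubelet/config.yaml (minimal hypothetical KubeletConfiguration)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup:true in the CRI config above
    staticPodPath: /etc/kubernetes/manifests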
Nov 12 20:54:50.791565 containerd[1468]: time="2024-11-12T20:54:50.791485398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.792950 containerd[1468]: time="2024-11-12T20:54:50.792897408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 12 20:54:50.794282 containerd[1468]: time="2024-11-12T20:54:50.794243124Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.799346 containerd[1468]: time="2024-11-12T20:54:50.799296728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.800213 containerd[1468]: time="2024-11-12T20:54:50.800166110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.250909284s" Nov 12 20:54:50.800213 containerd[1468]: time="2024-11-12T20:54:50.800205093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 12 20:54:50.800754 containerd[1468]: time="2024-11-12T20:54:50.800720941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 12 20:54:52.785805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723827773.mount: Deactivated successfully. Nov 12 20:54:54.663978 containerd[1468]: time="2024-11-12T20:54:54.663893087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:54.664839 containerd[1468]: time="2024-11-12T20:54:54.664753532Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650" Nov 12 20:54:54.666155 containerd[1468]: time="2024-11-12T20:54:54.666098626Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:54.669436 containerd[1468]: time="2024-11-12T20:54:54.669377540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:54.670664 containerd[1468]: time="2024-11-12T20:54:54.670594093Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.8698402s" Nov 12 20:54:54.670664 containerd[1468]: time="2024-11-12T20:54:54.670645900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 12 20:54:56.572575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
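The pulls above assemble the usual kubeadm control-plane image set for Kubernetes v1.31.2 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd), each fetched through containerd's CRI. The same prefetch can be requested explicitly; both commands below are standard tooling, shown for illustration:

    # Prefetch the control-plane images seen in the log (run as root)
    kubeadm config images pull --kubernetes-version v1.31.2
    crictl images | grep registry.k8s.io   # verify via the CRI image store containerd logs from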
Nov 12 20:54:56.581137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:56.606821 systemd[1]: Reloading requested from client PID 2047 ('systemctl') (unit session-7.scope)... Nov 12 20:54:56.606843 systemd[1]: Reloading... Nov 12 20:54:56.697894 zram_generator::config[2086]: No configuration found. Nov 12 20:54:57.247543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:57.326666 systemd[1]: Reloading finished in 719 ms. Nov 12 20:54:57.379014 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:54:57.379125 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:54:57.379443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:57.382789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:57.535688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:57.552213 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:54:57.592935 kubelet[2135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:57.592935 kubelet[2135]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:54:57.592935 kubelet[2135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
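The deprecation warnings above mean the kubelet unit still passes --container-runtime-endpoint and --volume-plugin-dir on the command line; upstream wants these set in the config file instead. The corresponding KubeletConfiguration keys are sketched below; the values are assumptions that mirror this host (the socket path matches the ContainerdEndpoint logged earlier, and the plugin directory matches the Flexvolume path the kubelet probes just below):

    # Config-file equivalents of the deprecated kubelet flags; these keys
    # belong in the KubeletConfiguration at /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/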
Nov 12 20:54:57.593360 kubelet[2135]: I1112 20:54:57.593012 2135 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:54:57.797234 kubelet[2135]: I1112 20:54:57.797077 2135 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:54:57.797234 kubelet[2135]: I1112 20:54:57.797114 2135 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:54:57.797422 kubelet[2135]: I1112 20:54:57.797392 2135 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:54:57.860031 kubelet[2135]: I1112 20:54:57.859957 2135 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:57.860664 kubelet[2135]: E1112 20:54:57.860625 2135 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:57.888626 kubelet[2135]: E1112 20:54:57.888555 2135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:54:57.888626 kubelet[2135]: I1112 20:54:57.888608 2135 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:54:57.896923 kubelet[2135]: I1112 20:54:57.896873 2135 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:54:57.899805 kubelet[2135]: I1112 20:54:57.899767 2135 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:54:57.900091 kubelet[2135]: I1112 20:54:57.900039 2135 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:54:57.900303 kubelet[2135]: I1112 20:54:57.900084 2135 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:54:57.900392 kubelet[2135]: I1112 20:54:57.900334 2135 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:54:57.900392 kubelet[2135]: I1112 20:54:57.900346 2135 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:54:57.900519 kubelet[2135]: I1112 20:54:57.900501 2135 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:57.903063 kubelet[2135]: I1112 20:54:57.903036 2135 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:54:57.903125 kubelet[2135]: I1112 20:54:57.903068 2135 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:54:57.903125 kubelet[2135]: I1112 20:54:57.903126 2135 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:54:57.903188 kubelet[2135]: I1112 20:54:57.903162 2135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:54:57.904994 kubelet[2135]: W1112 20:54:57.904800 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:57.904994 kubelet[2135]: E1112 20:54:57.904924 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:57.904994 kubelet[2135]: W1112 20:54:57.904961 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:57.904994 kubelet[2135]: E1112 20:54:57.905021 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:57.911016 kubelet[2135]: I1112 20:54:57.910986 2135 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:54:57.920240 kubelet[2135]: I1112 20:54:57.920192 2135 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:54:57.920407 kubelet[2135]: W1112 20:54:57.920315 2135 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:54:57.921096 kubelet[2135]: I1112 20:54:57.921071 2135 server.go:1269] "Started kubelet" Nov 12 20:54:57.921908 kubelet[2135]: I1112 20:54:57.921638 2135 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:54:57.921908 kubelet[2135]: I1112 20:54:57.921890 2135 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:54:57.922547 kubelet[2135]: I1112 20:54:57.922089 2135 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:54:57.923298 kubelet[2135]: I1112 20:54:57.922712 2135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:54:57.923298 kubelet[2135]: I1112 20:54:57.922882 2135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:54:57.923298 kubelet[2135]: I1112 20:54:57.922974 2135 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:54:57.924529 kubelet[2135]: E1112 20:54:57.924496 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:57.924610 kubelet[2135]: I1112 20:54:57.924555 2135 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:54:57.925696 kubelet[2135]: I1112 20:54:57.924764 2135 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:54:57.925696 kubelet[2135]: I1112 20:54:57.924890 2135 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:54:57.925696 kubelet[2135]: W1112 20:54:57.925199 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:57.925696 kubelet[2135]: E1112 20:54:57.925241 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:57.925696 kubelet[2135]: E1112 20:54:57.925319 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms" Nov 12 20:54:57.926328 kubelet[2135]: E1112 20:54:57.926288 2135 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:54:57.926375 kubelet[2135]: I1112 20:54:57.926321 2135 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:54:57.926553 kubelet[2135]: I1112 20:54:57.926509 2135 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:54:57.927936 kubelet[2135]: I1112 20:54:57.927906 2135 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:54:57.931995 kubelet[2135]: E1112 20:54:57.929695 2135 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753f608bf18ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:54:57.921046765 +0000 UTC m=+0.364521441,LastTimestamp:2024-11-12 20:54:57.921046765 +0000 UTC m=+0.364521441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:54:57.944671 kubelet[2135]: I1112 20:54:57.944575 2135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:54:57.946338 kubelet[2135]: I1112 20:54:57.946286 2135 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:54:57.946338 kubelet[2135]: I1112 20:54:57.946336 2135 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:54:57.946501 kubelet[2135]: I1112 20:54:57.946369 2135 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:54:57.946501 kubelet[2135]: E1112 20:54:57.946419 2135 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:54:57.949413 kubelet[2135]: I1112 20:54:57.949364 2135 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:54:57.949413 kubelet[2135]: I1112 20:54:57.949393 2135 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:54:57.949559 kubelet[2135]: I1112 20:54:57.949423 2135 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:57.953094 kubelet[2135]: W1112 20:54:57.952999 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:57.953272 kubelet[2135]: E1112 20:54:57.953224 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:58.025247 kubelet[2135]: E1112 20:54:58.025176 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.046589 kubelet[2135]: E1112 20:54:58.046488 2135 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:58.125994 kubelet[2135]: E1112 20:54:58.125838 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.126282 kubelet[2135]: E1112 20:54:58.126204 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms" Nov 12 20:54:58.226648 kubelet[2135]: E1112 20:54:58.226586 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.246941 kubelet[2135]: E1112 20:54:58.246850 2135 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:58.327439 kubelet[2135]: E1112 20:54:58.327365 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.427649 kubelet[2135]: E1112 20:54:58.427448 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.527343 kubelet[2135]: E1112 20:54:58.527273 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms" Nov 12 20:54:58.528302 kubelet[2135]: E1112 20:54:58.528276 2135 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Nov 12 20:54:58.628972 kubelet[2135]: E1112 20:54:58.628899 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.647073 kubelet[2135]: E1112 20:54:58.647023 2135 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:58.729602 kubelet[2135]: E1112 20:54:58.729448 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.765309 kubelet[2135]: W1112 20:54:58.765256 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:58.765457 kubelet[2135]: E1112 20:54:58.765311 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:58.830037 kubelet[2135]: E1112 20:54:58.829981 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:58.854953 kubelet[2135]: W1112 20:54:58.854912 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:58.855032 kubelet[2135]: E1112 20:54:58.854958 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:58.930916 kubelet[2135]: E1112 20:54:58.930820 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.031820 kubelet[2135]: E1112 20:54:59.031602 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.132175 kubelet[2135]: E1112 20:54:59.132122 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.163058 kubelet[2135]: W1112 20:54:59.162966 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:59.163175 kubelet[2135]: E1112 20:54:59.163067 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:59.228472 kubelet[2135]: W1112 20:54:59.228376 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:54:59.228472 kubelet[2135]: E1112 20:54:59.228463 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:59.232848 kubelet[2135]: E1112 20:54:59.232800 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.329005 kubelet[2135]: E1112 20:54:59.328831 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s" Nov 12 20:54:59.333969 kubelet[2135]: E1112 20:54:59.333934 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.434062 kubelet[2135]: E1112 20:54:59.433998 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.445761 kubelet[2135]: E1112 20:54:59.445642 2135 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753f608bf18ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:54:57.921046765 +0000 UTC m=+0.364521441,LastTimestamp:2024-11-12 20:54:57.921046765 +0000 UTC m=+0.364521441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:54:59.447801 kubelet[2135]: E1112 20:54:59.447755 2135 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:59.534452 kubelet[2135]: E1112 20:54:59.534375 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.635154 kubelet[2135]: E1112 20:54:59.634966 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.735477 kubelet[2135]: E1112 20:54:59.735405 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.835833 kubelet[2135]: E1112 20:54:59.835770 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:59.936397 kubelet[2135]: E1112 20:54:59.936238 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.036975 kubelet[2135]: E1112 20:55:00.036900 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.045643 kubelet[2135]: E1112 20:55:00.045612 2135 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:00.137262 kubelet[2135]: E1112 20:55:00.137217 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.140069 kubelet[2135]: I1112 20:55:00.140026 2135 policy_none.go:49] "None policy: Start" Nov 12 20:55:00.140916 kubelet[2135]: I1112 20:55:00.140893 2135 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:55:00.140983 kubelet[2135]: I1112 20:55:00.140931 2135 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:55:00.237679 kubelet[2135]: E1112 20:55:00.237514 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.337984 kubelet[2135]: E1112 20:55:00.337915 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.439037 kubelet[2135]: E1112 20:55:00.438978 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:00.506939 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:55:00.518526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:55:00.522208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
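The three slices systemd just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) are the cgroup parents for the Kubernetes QoS classes, matching the CgroupDriver:"systemd" nodeConfig logged earlier. Which parent a pod lands under follows from its resource requests and limits. A minimal stdlib-only sketch of that classification, using simplified stand-in types rather than the real k8s.io/api structs (the full kubelet rule additionally requires both cpu and memory limits on every container for Guaranteed):

    package main

    import "fmt"

    // Resources is a simplified stand-in for a container's resource
    // requests and limits, keyed by resource name ("cpu", "memory").
    type Resources struct {
        Requests map[string]string
        Limits   map[string]string
    }

    // qosClass mirrors the kubelet's rules in simplified form:
    // BestEffort when nothing is set anywhere, Guaranteed when every
    // container's requests equal its limits, Burstable otherwise.
    func qosClass(containers []Resources) string {
        anySet := false
        guaranteed := true
        for _, c := range containers {
            if len(c.Requests) > 0 || len(c.Limits) > 0 {
                anySet = true
            }
            if len(c.Limits) == 0 {
                guaranteed = false
            }
            for res, lim := range c.Limits {
                if c.Requests[res] != lim {
                    guaranteed = false
                }
            }
        }
        switch {
        case !anySet:
            return "BestEffort" // placed under kubepods-besteffort.slice
        case guaranteed:
            return "Guaranteed" // placed directly under kubepods.slice
        default:
            return "Burstable" // placed under kubepods-burstable.slice
        }
    }

    func main() {
        // Requests without matching limits, like a typical static
        // control-plane pod manifest: classified Burstable.
        apiserver := []Resources{{Requests: map[string]string{"cpu": "250m"}}}
        fmt.Println(qosClass(apiserver))
    }

That Burstable result is why the three static control-plane pods get kubepods-burstable-pod<uid>.slice units in the entries that follow.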
Nov 12 20:55:00.530821 kubelet[2135]: I1112 20:55:00.530778 2135 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:55:00.531100 kubelet[2135]: I1112 20:55:00.531082 2135 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:55:00.531186 kubelet[2135]: I1112 20:55:00.531103 2135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:55:00.531373 kubelet[2135]: I1112 20:55:00.531351 2135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:55:00.532240 kubelet[2135]: E1112 20:55:00.532208 2135 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:55:00.633460 kubelet[2135]: I1112 20:55:00.633407 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:00.633842 kubelet[2135]: E1112 20:55:00.633811 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:00.835595 kubelet[2135]: I1112 20:55:00.835427 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:00.836065 kubelet[2135]: E1112 20:55:00.835835 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:00.929990 kubelet[2135]: E1112 20:55:00.929922 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="3.2s" Nov 12 20:55:01.059296 systemd[1]: Created slice kubepods-burstable-podc1b7f35c99978afe851c215921ed97f3.slice - libcontainer container kubepods-burstable-podc1b7f35c99978afe851c215921ed97f3.slice. Nov 12 20:55:01.079634 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice. Nov 12 20:55:01.091486 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice. 
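Each of those per-pod slices embeds the static pod's UID (c1b7f35c..., 2bd0c21d..., 33673bc3...) in its unit name, nested under the QoS slice. A small illustrative helper for the cgroupfs path this nesting produces, assuming the kubelet's systemd naming rule that escapes dashes in pod UIDs to underscores (moot here, since these UIDs contain none) and the cri-containerd-<sandbox-id>.scope leaf that appears once the sandboxes start further down:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    // sandboxCgroupPath composes the cgroupfs location systemd gives a
    // burstable pod's sandbox scope: each .slice nests inside its parent,
    // and the sandbox itself becomes a transient .scope unit.
    func sandboxCgroupPath(podUID, sandboxID string) string {
        // Assumption: the kubelet escapes "-" in pod UIDs to "_" so the
        // UID can live inside a systemd unit name.
        uid := strings.ReplaceAll(podUID, "-", "_")
        podSlice := fmt.Sprintf("kubepods-burstable-pod%s.slice", uid)
        return path.Join(
            "/sys/fs/cgroup",
            "kubepods.slice",
            "kubepods-burstable.slice",
            podSlice,
            fmt.Sprintf("cri-containerd-%s.scope", sandboxID),
        )
    }

    func main() {
        // UID of the kube-apiserver-localhost static pod from the log,
        // paired with one of the sandbox IDs that appears later.
        fmt.Println(sandboxCgroupPath(
            "c1b7f35c99978afe851c215921ed97f3",
            "2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0",
        ))
    }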
Nov 12 20:55:01.131970 kubelet[2135]: W1112 20:55:01.131858 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:01.131970 kubelet[2135]: E1112 20:55:01.131970 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:01.142530 kubelet[2135]: I1112 20:55:01.142437 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:01.142530 kubelet[2135]: I1112 20:55:01.142518 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:01.142721 kubelet[2135]: I1112 20:55:01.142546 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:01.142721 kubelet[2135]: I1112 20:55:01.142634 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:01.142721 kubelet[2135]: I1112 20:55:01.142711 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:55:01.142822 kubelet[2135]: I1112 20:55:01.142737 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:01.142822 kubelet[2135]: I1112 20:55:01.142762 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:01.142822 kubelet[2135]: I1112 20:55:01.142800 2135 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:01.142822 kubelet[2135]: I1112 20:55:01.142822 2135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:01.238838 kubelet[2135]: I1112 20:55:01.238483 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:01.239007 kubelet[2135]: E1112 20:55:01.238963 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:01.377334 kubelet[2135]: E1112 20:55:01.377170 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:01.377977 containerd[1468]: time="2024-11-12T20:55:01.377907450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1b7f35c99978afe851c215921ed97f3,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:01.389359 kubelet[2135]: E1112 20:55:01.389318 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:01.389927 containerd[1468]: time="2024-11-12T20:55:01.389885894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:01.394112 kubelet[2135]: E1112 20:55:01.394071 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:01.394663 containerd[1468]: time="2024-11-12T20:55:01.394621137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:01.541894 kubelet[2135]: W1112 20:55:01.541798 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:01.541894 kubelet[2135]: E1112 20:55:01.541896 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:01.765188 kubelet[2135]: W1112 20:55:01.765000 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.126:6443: connect: connection refused Nov 12 20:55:01.765188 kubelet[2135]: E1112 20:55:01.765072 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:02.040417 kubelet[2135]: I1112 20:55:02.040275 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:02.040842 kubelet[2135]: E1112 20:55:02.040759 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:02.075642 kubelet[2135]: W1112 20:55:02.075554 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:02.075642 kubelet[2135]: E1112 20:55:02.075639 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:03.642849 kubelet[2135]: I1112 20:55:03.642797 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:03.643292 kubelet[2135]: E1112 20:55:03.643174 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:04.131309 kubelet[2135]: E1112 20:55:04.131144 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="6.4s" Nov 12 20:55:04.226305 kubelet[2135]: E1112 20:55:04.226245 2135 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:04.805823 kubelet[2135]: W1112 20:55:04.805743 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:04.805823 kubelet[2135]: E1112 20:55:04.805809 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:05.230736 kubelet[2135]: W1112 20:55:05.230572 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:05.230736 kubelet[2135]: E1112 20:55:05.230643 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:05.702857 kubelet[2135]: W1112 20:55:05.702654 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:05.702857 kubelet[2135]: E1112 20:55:05.702721 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:06.314542 kubelet[2135]: W1112 20:55:06.314493 2135 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Nov 12 20:55:06.315092 kubelet[2135]: E1112 20:55:06.314555 2135 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:06.399298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587812167.mount: Deactivated successfully. 
Nov 12 20:55:06.415918 containerd[1468]: time="2024-11-12T20:55:06.415821930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:06.417544 containerd[1468]: time="2024-11-12T20:55:06.417462067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:06.418686 containerd[1468]: time="2024-11-12T20:55:06.418614948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:55:06.420114 containerd[1468]: time="2024-11-12T20:55:06.420052210Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:06.421771 containerd[1468]: time="2024-11-12T20:55:06.421631963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:06.423558 containerd[1468]: time="2024-11-12T20:55:06.423497879Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:06.424857 containerd[1468]: time="2024-11-12T20:55:06.424809041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:06.428333 containerd[1468]: time="2024-11-12T20:55:06.428260611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:06.429223 containerd[1468]: time="2024-11-12T20:55:06.429158227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.051170273s" Nov 12 20:55:06.432134 containerd[1468]: time="2024-11-12T20:55:06.431806741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.04182834s" Nov 12 20:55:06.435735 containerd[1468]: time="2024-11-12T20:55:06.435660024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.040955538s" Nov 12 20:55:06.692699 containerd[1468]: time="2024-11-12T20:55:06.691782843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:06.692699 containerd[1468]: time="2024-11-12T20:55:06.691947837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:06.692699 containerd[1468]: time="2024-11-12T20:55:06.691978576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.692699 containerd[1468]: time="2024-11-12T20:55:06.692070340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.695736 containerd[1468]: time="2024-11-12T20:55:06.695276654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:06.695736 containerd[1468]: time="2024-11-12T20:55:06.695342389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:06.695736 containerd[1468]: time="2024-11-12T20:55:06.695361335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.695736 containerd[1468]: time="2024-11-12T20:55:06.695460072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.700492 containerd[1468]: time="2024-11-12T20:55:06.700198649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:06.700492 containerd[1468]: time="2024-11-12T20:55:06.700290303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:06.700492 containerd[1468]: time="2024-11-12T20:55:06.700308247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.700492 containerd[1468]: time="2024-11-12T20:55:06.700402917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.769117 systemd[1]: Started cri-containerd-9c5572adcf6747c0b20b6df73fe2aeeaaf58adef5c8653a91bf69e9dff3b78d2.scope - libcontainer container 9c5572adcf6747c0b20b6df73fe2aeeaaf58adef5c8653a91bf69e9dff3b78d2. Nov 12 20:55:06.774665 systemd[1]: Started cri-containerd-2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0.scope - libcontainer container 2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0. Nov 12 20:55:06.777297 systemd[1]: Started cri-containerd-5c082b6199fbf9d153e7db9195ac513e490d4c7d83d544f2ef591fddb7318429.scope - libcontainer container 5c082b6199fbf9d153e7db9195ac513e490d4c7d83d544f2ef591fddb7318429. 
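The "Pulled image registry.k8s.io/pause:3.8" lines above fetch the sandbox (pause) image that every RunPodSandbox call needs: it is the container that holds a pod's shared namespaces, which is why all three pulls complete just before the cri-containerd-*.scope units start. A sketch of the same pull through the containerd Go client, assuming the default socket path and the k8s.io namespace the CRI plugin uses:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumed default containerd socket on this host.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the sandbox image named in the log.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }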
Nov 12 20:55:06.845020 kubelet[2135]: I1112 20:55:06.844977 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:06.845917 kubelet[2135]: E1112 20:55:06.845885 2135 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Nov 12 20:55:06.869279 containerd[1468]: time="2024-11-12T20:55:06.869204613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1b7f35c99978afe851c215921ed97f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0\"" Nov 12 20:55:06.870479 kubelet[2135]: E1112 20:55:06.870403 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:06.871408 containerd[1468]: time="2024-11-12T20:55:06.871376590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c5572adcf6747c0b20b6df73fe2aeeaaf58adef5c8653a91bf69e9dff3b78d2\"" Nov 12 20:55:06.872358 kubelet[2135]: E1112 20:55:06.872331 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:06.874693 containerd[1468]: time="2024-11-12T20:55:06.874662386Z" level=info msg="CreateContainer within sandbox \"2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:55:06.875268 containerd[1468]: time="2024-11-12T20:55:06.875215968Z" level=info msg="CreateContainer within sandbox \"9c5572adcf6747c0b20b6df73fe2aeeaaf58adef5c8653a91bf69e9dff3b78d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:55:06.881059 containerd[1468]: time="2024-11-12T20:55:06.881010100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c082b6199fbf9d153e7db9195ac513e490d4c7d83d544f2ef591fddb7318429\"" Nov 12 20:55:06.881918 kubelet[2135]: E1112 20:55:06.881892 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:06.883199 containerd[1468]: time="2024-11-12T20:55:06.883169795Z" level=info msg="CreateContainer within sandbox \"5c082b6199fbf9d153e7db9195ac513e490d4c7d83d544f2ef591fddb7318429\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:55:07.140356 containerd[1468]: time="2024-11-12T20:55:07.140177254Z" level=info msg="CreateContainer within sandbox \"9c5572adcf6747c0b20b6df73fe2aeeaaf58adef5c8653a91bf69e9dff3b78d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2492f446fdf41598d4d0f83e60aa0cd1a17e264224deae74f99ba52b6abdc93c\"" Nov 12 20:55:07.141176 containerd[1468]: time="2024-11-12T20:55:07.141122199Z" level=info msg="StartContainer for \"2492f446fdf41598d4d0f83e60aa0cd1a17e264224deae74f99ba52b6abdc93c\"" Nov 12 20:55:07.145882 containerd[1468]: time="2024-11-12T20:55:07.145786898Z" level=info msg="CreateContainer within sandbox 
\"2fd99953c139bafc6bc1a11ead9e69b749d71ba0a6221be204f1e293cc27c5a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e2087fe93d21acffda08be55621c8c53e4e98b2b5f43c0635e27a5887f395f7\"" Nov 12 20:55:07.146687 containerd[1468]: time="2024-11-12T20:55:07.146617394Z" level=info msg="StartContainer for \"8e2087fe93d21acffda08be55621c8c53e4e98b2b5f43c0635e27a5887f395f7\"" Nov 12 20:55:07.148770 containerd[1468]: time="2024-11-12T20:55:07.148723074Z" level=info msg="CreateContainer within sandbox \"5c082b6199fbf9d153e7db9195ac513e490d4c7d83d544f2ef591fddb7318429\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7c6e8424ea4a316a8bf93de14baa843e4bf81e11f86bb39d4a9bbba10c4786e\"" Nov 12 20:55:07.149899 containerd[1468]: time="2024-11-12T20:55:07.149495220Z" level=info msg="StartContainer for \"a7c6e8424ea4a316a8bf93de14baa843e4bf81e11f86bb39d4a9bbba10c4786e\"" Nov 12 20:55:07.217207 systemd[1]: Started cri-containerd-a7c6e8424ea4a316a8bf93de14baa843e4bf81e11f86bb39d4a9bbba10c4786e.scope - libcontainer container a7c6e8424ea4a316a8bf93de14baa843e4bf81e11f86bb39d4a9bbba10c4786e. Nov 12 20:55:07.221414 systemd[1]: Started cri-containerd-8e2087fe93d21acffda08be55621c8c53e4e98b2b5f43c0635e27a5887f395f7.scope - libcontainer container 8e2087fe93d21acffda08be55621c8c53e4e98b2b5f43c0635e27a5887f395f7. Nov 12 20:55:07.234196 systemd[1]: Started cri-containerd-2492f446fdf41598d4d0f83e60aa0cd1a17e264224deae74f99ba52b6abdc93c.scope - libcontainer container 2492f446fdf41598d4d0f83e60aa0cd1a17e264224deae74f99ba52b6abdc93c. Nov 12 20:55:07.534111 containerd[1468]: time="2024-11-12T20:55:07.533663101Z" level=info msg="StartContainer for \"8e2087fe93d21acffda08be55621c8c53e4e98b2b5f43c0635e27a5887f395f7\" returns successfully" Nov 12 20:55:07.534111 containerd[1468]: time="2024-11-12T20:55:07.533668130Z" level=info msg="StartContainer for \"2492f446fdf41598d4d0f83e60aa0cd1a17e264224deae74f99ba52b6abdc93c\" returns successfully" Nov 12 20:55:07.534111 containerd[1468]: time="2024-11-12T20:55:07.533673580Z" level=info msg="StartContainer for \"a7c6e8424ea4a316a8bf93de14baa843e4bf81e11f86bb39d4a9bbba10c4786e\" returns successfully" Nov 12 20:55:07.972446 kubelet[2135]: E1112 20:55:07.972274 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:07.975723 kubelet[2135]: E1112 20:55:07.975692 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:07.983399 kubelet[2135]: E1112 20:55:07.983322 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:08.327271 update_engine[1460]: I20241112 20:55:08.327075 1460 update_attempter.cc:509] Updating boot flags... 
Nov 12 20:55:08.400040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2420) Nov 12 20:55:08.491917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2424) Nov 12 20:55:08.628572 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2424) Nov 12 20:55:08.985277 kubelet[2135]: E1112 20:55:08.985234 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:08.985820 kubelet[2135]: E1112 20:55:08.985752 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:08.986129 kubelet[2135]: E1112 20:55:08.986087 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:09.450921 kubelet[2135]: E1112 20:55:09.450724 2135 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:55:09.908345 kubelet[2135]: E1112 20:55:09.908134 2135 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:55:09.986471 kubelet[2135]: E1112 20:55:09.986434 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:09.987664 kubelet[2135]: E1112 20:55:09.987621 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:10.519353 kubelet[2135]: E1112 20:55:10.519310 2135 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:55:10.532664 kubelet[2135]: E1112 20:55:10.532567 2135 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:55:10.562387 kubelet[2135]: E1112 20:55:10.562311 2135 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:55:10.988082 kubelet[2135]: E1112 20:55:10.988034 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:11.580019 kubelet[2135]: E1112 20:55:11.579965 2135 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:55:13.248217 kubelet[2135]: I1112 20:55:13.248176 2135 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:13.252944 kubelet[2135]: I1112 20:55:13.252896 2135 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:55:13.252944 kubelet[2135]: E1112 20:55:13.252944 2135 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Nov 12 20:55:13.314629 kubelet[2135]: E1112 20:55:13.314564 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.415801 kubelet[2135]: E1112 20:55:13.415733 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.516842 kubelet[2135]: E1112 20:55:13.516669 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.617617 kubelet[2135]: E1112 20:55:13.617550 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.718353 kubelet[2135]: E1112 20:55:13.718297 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.819143 kubelet[2135]: E1112 20:55:13.818961 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:13.919320 kubelet[2135]: E1112 20:55:13.919253 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.019430 kubelet[2135]: E1112 20:55:14.019365 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.119950 kubelet[2135]: E1112 20:55:14.119758 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.220358 kubelet[2135]: E1112 20:55:14.220302 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.304207 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-7.scope)... Nov 12 20:55:14.304229 systemd[1]: Reloading... Nov 12 20:55:14.320745 kubelet[2135]: E1112 20:55:14.320704 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.383899 zram_generator::config[2473]: No configuration found. Nov 12 20:55:14.421532 kubelet[2135]: E1112 20:55:14.421482 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.502630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:14.522747 kubelet[2135]: E1112 20:55:14.522225 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.597549 systemd[1]: Reloading finished in 292 ms. Nov 12 20:55:14.623053 kubelet[2135]: E1112 20:55:14.623009 2135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:55:14.645006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:14.666702 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:55:14.667067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:14.677211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:14.842392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:14.848310 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:55:14.911171 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:55:14.911171 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:55:14.911171 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:55:14.911620 kubelet[2514]: I1112 20:55:14.911211 2514 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:55:14.917818 kubelet[2514]: I1112 20:55:14.917773 2514 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:55:14.917818 kubelet[2514]: I1112 20:55:14.917802 2514 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:55:14.918098 kubelet[2514]: I1112 20:55:14.918075 2514 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:55:14.919289 kubelet[2514]: I1112 20:55:14.919269 2514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:55:14.921189 kubelet[2514]: I1112 20:55:14.921157 2514 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:55:14.925153 kubelet[2514]: E1112 20:55:14.925116 2514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:55:14.925153 kubelet[2514]: I1112 20:55:14.925140 2514 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:55:14.930797 kubelet[2514]: I1112 20:55:14.930690 2514 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:55:14.930913 kubelet[2514]: I1112 20:55:14.930818 2514 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:55:14.931007 kubelet[2514]: I1112 20:55:14.930982 2514 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:55:14.931161 kubelet[2514]: I1112 20:55:14.931007 2514 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:55:14.931242 kubelet[2514]: I1112 20:55:14.931169 2514 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:55:14.931242 kubelet[2514]: I1112 20:55:14.931178 2514 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:55:14.931242 kubelet[2514]: I1112 20:55:14.931217 2514 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:14.931357 kubelet[2514]: I1112 20:55:14.931347 2514 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:55:14.931391 kubelet[2514]: I1112 20:55:14.931360 2514 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:55:14.931471 kubelet[2514]: I1112 20:55:14.931392 2514 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:55:14.931471 kubelet[2514]: I1112 20:55:14.931406 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:55:14.932970 kubelet[2514]: I1112 20:55:14.932931 2514 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:55:14.933545 kubelet[2514]: I1112 20:55:14.933507 2514 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:55:14.934289 kubelet[2514]: I1112 20:55:14.934263 2514 server.go:1269] "Started kubelet" Nov 12 20:55:14.936277 kubelet[2514]: I1112 20:55:14.935769 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 
20:55:14.936277 kubelet[2514]: I1112 20:55:14.936019 2514 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:55:14.937201 kubelet[2514]: I1112 20:55:14.937165 2514 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:55:14.937445 kubelet[2514]: I1112 20:55:14.937403 2514 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:55:14.940703 kubelet[2514]: I1112 20:55:14.940684 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:55:14.942665 kubelet[2514]: E1112 20:55:14.942635 2514 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:55:14.942804 kubelet[2514]: I1112 20:55:14.942786 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:55:14.944806 kubelet[2514]: I1112 20:55:14.944766 2514 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:55:14.945029 kubelet[2514]: I1112 20:55:14.944924 2514 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:55:14.945272 kubelet[2514]: I1112 20:55:14.945246 2514 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:55:14.946704 kubelet[2514]: I1112 20:55:14.946683 2514 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:55:14.947119 kubelet[2514]: I1112 20:55:14.947008 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:55:14.948309 kubelet[2514]: I1112 20:55:14.948290 2514 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:55:14.954075 kubelet[2514]: I1112 20:55:14.954012 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:55:14.955720 kubelet[2514]: I1112 20:55:14.955699 2514 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:55:14.956199 kubelet[2514]: I1112 20:55:14.955825 2514 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:55:14.956199 kubelet[2514]: I1112 20:55:14.955849 2514 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:55:14.956199 kubelet[2514]: E1112 20:55:14.955929 2514 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:55:14.992092 kubelet[2514]: I1112 20:55:14.992025 2514 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:55:14.992092 kubelet[2514]: I1112 20:55:14.992054 2514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:55:14.992092 kubelet[2514]: I1112 20:55:14.992082 2514 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:14.992292 kubelet[2514]: I1112 20:55:14.992267 2514 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:55:14.992365 kubelet[2514]: I1112 20:55:14.992281 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:55:14.992365 kubelet[2514]: I1112 20:55:14.992330 2514 policy_none.go:49] "None policy: Start" Nov 12 20:55:14.993116 kubelet[2514]: I1112 20:55:14.993093 2514 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:55:14.993194 kubelet[2514]: I1112 20:55:14.993127 2514 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:55:14.993311 kubelet[2514]: I1112 20:55:14.993297 2514 state_mem.go:75] "Updated machine memory state" Nov 12 20:55:14.998675 kubelet[2514]: I1112 20:55:14.998628 2514 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:55:14.999297 kubelet[2514]: I1112 20:55:14.998956 2514 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:55:14.999297 kubelet[2514]: I1112 20:55:14.998983 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:55:14.999297 kubelet[2514]: I1112 20:55:14.999227 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:55:15.106190 kubelet[2514]: I1112 20:55:15.106133 2514 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:15.145577 kubelet[2514]: I1112 20:55:15.145490 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:15.145577 kubelet[2514]: I1112 20:55:15.145527 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:15.145577 kubelet[2514]: I1112 20:55:15.145568 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:15.145577 kubelet[2514]: I1112 20:55:15.145592 
2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:15.145942 kubelet[2514]: I1112 20:55:15.145612 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:55:15.145942 kubelet[2514]: I1112 20:55:15.145635 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1b7f35c99978afe851c215921ed97f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1b7f35c99978afe851c215921ed97f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:15.145942 kubelet[2514]: I1112 20:55:15.145656 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:15.145942 kubelet[2514]: I1112 20:55:15.145675 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:15.145942 kubelet[2514]: I1112 20:55:15.145698 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:55:15.302207 kubelet[2514]: I1112 20:55:15.302156 2514 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Nov 12 20:55:15.302397 kubelet[2514]: I1112 20:55:15.302296 2514 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:55:15.527664 kubelet[2514]: E1112 20:55:15.527605 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.527664 kubelet[2514]: E1112 20:55:15.527649 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.527664 kubelet[2514]: E1112 20:55:15.527605 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.932763 kubelet[2514]: I1112 20:55:15.932696 2514 apiserver.go:52] "Watching apiserver" Nov 12 20:55:15.945654 kubelet[2514]: I1112 20:55:15.945587 2514 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:55:15.970668 kubelet[2514]: E1112 20:55:15.970552 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.978063 kubelet[2514]: E1112 20:55:15.977959 2514 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 12 20:55:15.978273 kubelet[2514]: E1112 20:55:15.978191 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.979282 kubelet[2514]: E1112 20:55:15.979254 2514 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:55:15.979512 kubelet[2514]: E1112 20:55:15.979453 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:15.994240 kubelet[2514]: I1112 20:55:15.994129 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.994098159 podStartE2EDuration="994.098159ms" podCreationTimestamp="2024-11-12 20:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:15.994023668 +0000 UTC m=+1.140654086" watchObservedRunningTime="2024-11-12 20:55:15.994098159 +0000 UTC m=+1.140728577" Nov 12 20:55:16.005220 kubelet[2514]: I1112 20:55:16.005146 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.00512603 podStartE2EDuration="1.00512603s" podCreationTimestamp="2024-11-12 20:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:16.004850989 +0000 UTC m=+1.151481407" watchObservedRunningTime="2024-11-12 20:55:16.00512603 +0000 UTC m=+1.151756448" Nov 12 20:55:16.033306 kubelet[2514]: I1112 20:55:16.033080 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.033050707 podStartE2EDuration="1.033050707s" podCreationTimestamp="2024-11-12 20:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:16.020494885 +0000 UTC m=+1.167125303" watchObservedRunningTime="2024-11-12 20:55:16.033050707 +0000 UTC m=+1.179681125" Nov 12 20:55:16.971420 kubelet[2514]: E1112 20:55:16.971379 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:16.971975 kubelet[2514]: E1112 20:55:16.971511 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:19.283451 kubelet[2514]: I1112 20:55:19.283297 2514 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Nov 12 20:55:19.284429 containerd[1468]: time="2024-11-12T20:55:19.284285389Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:55:19.286985 kubelet[2514]: I1112 20:55:19.286112 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:55:19.708020 systemd[1]: Created slice kubepods-besteffort-podbd19ad79_cd0b_4d3b_9708_9f618811dc28.slice - libcontainer container kubepods-besteffort-podbd19ad79_cd0b_4d3b_9708_9f618811dc28.slice. Nov 12 20:55:19.773209 kubelet[2514]: I1112 20:55:19.773151 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd19ad79-cd0b-4d3b-9708-9f618811dc28-kube-proxy\") pod \"kube-proxy-29jxp\" (UID: \"bd19ad79-cd0b-4d3b-9708-9f618811dc28\") " pod="kube-system/kube-proxy-29jxp" Nov 12 20:55:19.773209 kubelet[2514]: I1112 20:55:19.773201 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd19ad79-cd0b-4d3b-9708-9f618811dc28-xtables-lock\") pod \"kube-proxy-29jxp\" (UID: \"bd19ad79-cd0b-4d3b-9708-9f618811dc28\") " pod="kube-system/kube-proxy-29jxp" Nov 12 20:55:19.773209 kubelet[2514]: I1112 20:55:19.773217 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd19ad79-cd0b-4d3b-9708-9f618811dc28-lib-modules\") pod \"kube-proxy-29jxp\" (UID: \"bd19ad79-cd0b-4d3b-9708-9f618811dc28\") " pod="kube-system/kube-proxy-29jxp" Nov 12 20:55:19.773457 kubelet[2514]: I1112 20:55:19.773231 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd4w\" (UniqueName: \"kubernetes.io/projected/bd19ad79-cd0b-4d3b-9708-9f618811dc28-kube-api-access-pfd4w\") pod \"kube-proxy-29jxp\" (UID: \"bd19ad79-cd0b-4d3b-9708-9f618811dc28\") " pod="kube-system/kube-proxy-29jxp" Nov 12 20:55:19.895719 kubelet[2514]: E1112 20:55:19.895672 2514 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 20:55:19.895719 kubelet[2514]: E1112 20:55:19.895709 2514 projected.go:194] Error preparing data for projected volume kube-api-access-pfd4w for pod kube-system/kube-proxy-29jxp: configmap "kube-root-ca.crt" not found Nov 12 20:55:19.895947 kubelet[2514]: E1112 20:55:19.895788 2514 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd19ad79-cd0b-4d3b-9708-9f618811dc28-kube-api-access-pfd4w podName:bd19ad79-cd0b-4d3b-9708-9f618811dc28 nodeName:}" failed. No retries permitted until 2024-11-12 20:55:20.395761423 +0000 UTC m=+5.542391841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfd4w" (UniqueName: "kubernetes.io/projected/bd19ad79-cd0b-4d3b-9708-9f618811dc28-kube-api-access-pfd4w") pod "kube-proxy-29jxp" (UID: "bd19ad79-cd0b-4d3b-9708-9f618811dc28") : configmap "kube-root-ca.crt" not found Nov 12 20:55:20.296891 systemd[1]: Created slice kubepods-besteffort-pod51ff5bd5_341e_4b97_abd2_dbddd29f0eb6.slice - libcontainer container kubepods-besteffort-pod51ff5bd5_341e_4b97_abd2_dbddd29f0eb6.slice. 
Nov 12 20:55:20.377571 kubelet[2514]: I1112 20:55:20.377493 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51ff5bd5-341e-4b97-abd2-dbddd29f0eb6-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-wddpd\" (UID: \"51ff5bd5-341e-4b97-abd2-dbddd29f0eb6\") " pod="tigera-operator/tigera-operator-f8bc97d4c-wddpd" Nov 12 20:55:20.377571 kubelet[2514]: I1112 20:55:20.377582 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqbx9\" (UniqueName: \"kubernetes.io/projected/51ff5bd5-341e-4b97-abd2-dbddd29f0eb6-kube-api-access-fqbx9\") pod \"tigera-operator-f8bc97d4c-wddpd\" (UID: \"51ff5bd5-341e-4b97-abd2-dbddd29f0eb6\") " pod="tigera-operator/tigera-operator-f8bc97d4c-wddpd" Nov 12 20:55:20.603410 containerd[1468]: time="2024-11-12T20:55:20.603279379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-wddpd,Uid:51ff5bd5-341e-4b97-abd2-dbddd29f0eb6,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:55:20.625038 kubelet[2514]: E1112 20:55:20.624985 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.626603 containerd[1468]: time="2024-11-12T20:55:20.625536675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29jxp,Uid:bd19ad79-cd0b-4d3b-9708-9f618811dc28,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:20.676959 containerd[1468]: time="2024-11-12T20:55:20.676674503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:20.676959 containerd[1468]: time="2024-11-12T20:55:20.676728726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:20.676959 containerd[1468]: time="2024-11-12T20:55:20.676738865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:20.676959 containerd[1468]: time="2024-11-12T20:55:20.676834185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:20.685171 containerd[1468]: time="2024-11-12T20:55:20.685050875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:20.685171 containerd[1468]: time="2024-11-12T20:55:20.685136506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:20.685171 containerd[1468]: time="2024-11-12T20:55:20.685150473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:20.685348 containerd[1468]: time="2024-11-12T20:55:20.685246965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:20.706088 systemd[1]: Started cri-containerd-fb3eec08c34292827704ff73f058f4898806f8a34a5a65c6cffbc207df4e87a2.scope - libcontainer container fb3eec08c34292827704ff73f058f4898806f8a34a5a65c6cffbc207df4e87a2. 
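
The RunPodSandbox entries above are containerd answering the kubelet over the CRI gRPC API; the 64-hex sandbox ids it returns (fb3eec08..., 219d6f85...) are what the subsequent CreateContainer calls reference. A minimal sketch of the same call from a Go client, assuming k8s.io/cri-api and the default containerd socket path (both assumptions; the log does not name either):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // assumed socket path; the log does not print it
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // metadata mirrors the fields printed in the kube-proxy sandbox entry above
        resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-29jxp",
                    Uid:       "bd19ad79-cd0b-4d3b-9708-9f618811dc28",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }

A real kubelet request also carries DNS, port, and linux security settings in the PodSandboxConfig; they are omitted here to keep the sketch short.
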
Nov 12 20:55:20.710353 systemd[1]: Started cri-containerd-219d6f85d083f070e55523e7792c95d85f49acfaa76dabca4d2e3d532d76064f.scope - libcontainer container 219d6f85d083f070e55523e7792c95d85f49acfaa76dabca4d2e3d532d76064f. Nov 12 20:55:20.737102 containerd[1468]: time="2024-11-12T20:55:20.737037074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29jxp,Uid:bd19ad79-cd0b-4d3b-9708-9f618811dc28,Namespace:kube-system,Attempt:0,} returns sandbox id \"219d6f85d083f070e55523e7792c95d85f49acfaa76dabca4d2e3d532d76064f\"" Nov 12 20:55:20.737927 kubelet[2514]: E1112 20:55:20.737645 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.742187 containerd[1468]: time="2024-11-12T20:55:20.742052816Z" level=info msg="CreateContainer within sandbox \"219d6f85d083f070e55523e7792c95d85f49acfaa76dabca4d2e3d532d76064f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:55:20.752486 containerd[1468]: time="2024-11-12T20:55:20.752372623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-wddpd,Uid:51ff5bd5-341e-4b97-abd2-dbddd29f0eb6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fb3eec08c34292827704ff73f058f4898806f8a34a5a65c6cffbc207df4e87a2\"" Nov 12 20:55:20.754241 containerd[1468]: time="2024-11-12T20:55:20.754220309Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:55:20.769246 containerd[1468]: time="2024-11-12T20:55:20.769195338Z" level=info msg="CreateContainer within sandbox \"219d6f85d083f070e55523e7792c95d85f49acfaa76dabca4d2e3d532d76064f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b4d593340faac783d39c61c86aa974270b428f661a53d9ad76773ca4496d9bc\"" Nov 12 20:55:20.769818 containerd[1468]: time="2024-11-12T20:55:20.769791523Z" level=info msg="StartContainer for \"6b4d593340faac783d39c61c86aa974270b428f661a53d9ad76773ca4496d9bc\"" Nov 12 20:55:20.804024 systemd[1]: Started cri-containerd-6b4d593340faac783d39c61c86aa974270b428f661a53d9ad76773ca4496d9bc.scope - libcontainer container 6b4d593340faac783d39c61c86aa974270b428f661a53d9ad76773ca4496d9bc. Nov 12 20:55:20.902669 containerd[1468]: time="2024-11-12T20:55:20.902518644Z" level=info msg="StartContainer for \"6b4d593340faac783d39c61c86aa974270b428f661a53d9ad76773ca4496d9bc\" returns successfully" Nov 12 20:55:20.979308 kubelet[2514]: E1112 20:55:20.979081 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.991452 kubelet[2514]: I1112 20:55:20.990686 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29jxp" podStartSLOduration=1.99063491 podStartE2EDuration="1.99063491s" podCreationTimestamp="2024-11-12 20:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:20.990481802 +0000 UTC m=+6.137112220" watchObservedRunningTime="2024-11-12 20:55:20.99063491 +0000 UTC m=+6.137265328" Nov 12 20:55:22.884489 sudo[1646]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:22.886988 sshd[1643]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:22.892193 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:44950.service: Deactivated successfully. 
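
The MountVolume.SetUp failure for kube-api-access-pfd4w a little earlier shows the kubelet's retry pacing: the mount failed at 20:55:19.895 because kube-root-ca.crt did not exist yet, and "No retries permitted until 20:55:20.395..." is exactly 500ms later (durationBeforeRetry 500ms). A rough sketch of that pacing, assuming the conventional doubling backoff; only the 500ms initial delay is taken from the log, the doubling and the cap are assumptions:

    package main

    import (
        "fmt"
        "time"
    )

    // durationBeforeRetry sketches the wait after the n-th consecutive failure:
    // 500ms initially (as printed in the log), assumed to double per failure
    // up to an assumed cap.
    func durationBeforeRetry(failures int) time.Duration {
        d := 500 * time.Millisecond
        for i := 1; i < failures; i++ {
            d *= 2
        }
        if cap := 2 * time.Minute; d > cap {
            d = cap
        }
        return d
    }

    func main() {
        for n := 1; n <= 5; n++ {
            fmt.Println(n, durationBeforeRetry(n)) // 1 500ms, 2 1s, 3 2s, 4 4s, 5 8s
        }
    }

In this boot the first retry evidently succeeded, since the kube-proxy sandbox comes up at 20:55:20.737 in the entries above.
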
Nov 12 20:55:22.894414 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:55:22.894639 systemd[1]: session-7.scope: Consumed 4.947s CPU time, 157.1M memory peak, 0B memory swap peak. Nov 12 20:55:22.895333 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:55:22.896523 systemd-logind[1450]: Removed session 7. Nov 12 20:55:22.918112 kubelet[2514]: E1112 20:55:22.918053 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:22.983599 kubelet[2514]: E1112 20:55:22.983550 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:24.399995 kubelet[2514]: E1112 20:55:24.399932 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:24.986607 kubelet[2514]: E1112 20:55:24.986554 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:25.934410 kubelet[2514]: E1112 20:55:25.934326 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:30.568351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604787876.mount: Deactivated successfully. Nov 12 20:55:31.268938 containerd[1468]: time="2024-11-12T20:55:31.268824460Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.327904 containerd[1468]: time="2024-11-12T20:55:31.327815503Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763359" Nov 12 20:55:31.379080 containerd[1468]: time="2024-11-12T20:55:31.379015838Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.438570 containerd[1468]: time="2024-11-12T20:55:31.438510950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.439523 containerd[1468]: time="2024-11-12T20:55:31.439465416Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 10.68515111s" Nov 12 20:55:31.439523 containerd[1468]: time="2024-11-12T20:55:31.439519167Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:55:31.442373 containerd[1468]: time="2024-11-12T20:55:31.442329134Z" level=info msg="CreateContainer within sandbox \"fb3eec08c34292827704ff73f058f4898806f8a34a5a65c6cffbc207df4e87a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 
20:55:31.596370 containerd[1468]: time="2024-11-12T20:55:31.596196841Z" level=info msg="CreateContainer within sandbox \"fb3eec08c34292827704ff73f058f4898806f8a34a5a65c6cffbc207df4e87a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b\"" Nov 12 20:55:31.597043 containerd[1468]: time="2024-11-12T20:55:31.596955809Z" level=info msg="StartContainer for \"5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b\"" Nov 12 20:55:31.624327 systemd[1]: run-containerd-runc-k8s.io-5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b-runc.jX79vz.mount: Deactivated successfully. Nov 12 20:55:31.632050 systemd[1]: Started cri-containerd-5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b.scope - libcontainer container 5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b. Nov 12 20:55:31.761973 containerd[1468]: time="2024-11-12T20:55:31.761896770Z" level=info msg="StartContainer for \"5e9434686e2c54cf95adbb3cf6243c171f0e8876b30e4fa1a54a5c9ff9c9e14b\" returns successfully" Nov 12 20:55:35.351339 kubelet[2514]: I1112 20:55:35.350560 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-wddpd" podStartSLOduration=4.66375906 podStartE2EDuration="15.350536012s" podCreationTimestamp="2024-11-12 20:55:20 +0000 UTC" firstStartedPulling="2024-11-12 20:55:20.753571976 +0000 UTC m=+5.900202394" lastFinishedPulling="2024-11-12 20:55:31.440348928 +0000 UTC m=+16.586979346" observedRunningTime="2024-11-12 20:55:32.132827279 +0000 UTC m=+17.279457697" watchObservedRunningTime="2024-11-12 20:55:35.350536012 +0000 UTC m=+20.497166440" Nov 12 20:55:35.361403 systemd[1]: Created slice kubepods-besteffort-podcec504af_1556_4ca1_98cb_132b96c2dd6d.slice - libcontainer container kubepods-besteffort-podcec504af_1556_4ca1_98cb_132b96c2dd6d.slice. Nov 12 20:55:35.376838 kubelet[2514]: I1112 20:55:35.376689 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cec504af-1556-4ca1-98cb-132b96c2dd6d-typha-certs\") pod \"calico-typha-775b47f464-npfcg\" (UID: \"cec504af-1556-4ca1-98cb-132b96c2dd6d\") " pod="calico-system/calico-typha-775b47f464-npfcg" Nov 12 20:55:35.376838 kubelet[2514]: I1112 20:55:35.376743 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6t2n\" (UniqueName: \"kubernetes.io/projected/cec504af-1556-4ca1-98cb-132b96c2dd6d-kube-api-access-s6t2n\") pod \"calico-typha-775b47f464-npfcg\" (UID: \"cec504af-1556-4ca1-98cb-132b96c2dd6d\") " pod="calico-system/calico-typha-775b47f464-npfcg" Nov 12 20:55:35.376838 kubelet[2514]: I1112 20:55:35.376766 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec504af-1556-4ca1-98cb-132b96c2dd6d-tigera-ca-bundle\") pod \"calico-typha-775b47f464-npfcg\" (UID: \"cec504af-1556-4ca1-98cb-132b96c2dd6d\") " pod="calico-system/calico-typha-775b47f464-npfcg" Nov 12 20:55:35.388083 systemd[1]: Created slice kubepods-besteffort-pod59f36097_2a71_4a1b_96a5_2b7e23984756.slice - libcontainer container kubepods-besteffort-pod59f36097_2a71_4a1b_96a5_2b7e23984756.slice. 
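
The pod_startup_latency_tracker entry for tigera-operator above makes the relation between its two durations checkable: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (20:55:35.350536012 − 20:55:20 = 15.350536012s), and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling − firstStartedPulling = 10.686776952s, leaving 4.66375906s). A short Go verification using only the timestamps printed in the log; the layout string is the only assumption:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the log's timestamp format

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-11-12 20:55:20 +0000 UTC")             // podCreationTimestamp
        observed := mustParse("2024-11-12 20:55:35.350536012 +0000 UTC")  // watchObservedRunningTime
        firstPull := mustParse("2024-11-12 20:55:20.753571976 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2024-11-12 20:55:31.440348928 +0000 UTC")  // lastFinishedPulling

        e2e := observed.Sub(created)    // 15.350536012s == podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 10.686776952s spent pulling quay.io/tigera/operator
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull prints 4.66375906s == podStartSLOduration
    }

The earlier control-plane pods (kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy) show zero-valued pull timestamps, so for them the SLO and end-to-end durations coincide.
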
Nov 12 20:55:35.460740 kubelet[2514]: E1112 20:55:35.460666 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:35.477996 kubelet[2514]: I1112 20:55:35.477937 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-cni-bin-dir\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.477996 kubelet[2514]: I1112 20:55:35.477985 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98f557dd-e3c8-4561-ad63-16e2919af7c9-socket-dir\") pod \"csi-node-driver-pl6sb\" (UID: \"98f557dd-e3c8-4561-ad63-16e2919af7c9\") " pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:55:35.477996 kubelet[2514]: I1112 20:55:35.478008 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbtv\" (UniqueName: \"kubernetes.io/projected/98f557dd-e3c8-4561-ad63-16e2919af7c9-kube-api-access-7lbtv\") pod \"csi-node-driver-pl6sb\" (UID: \"98f557dd-e3c8-4561-ad63-16e2919af7c9\") " pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:55:35.477996 kubelet[2514]: I1112 20:55:35.478027 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/98f557dd-e3c8-4561-ad63-16e2919af7c9-varrun\") pod \"csi-node-driver-pl6sb\" (UID: \"98f557dd-e3c8-4561-ad63-16e2919af7c9\") " pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:55:35.477996 kubelet[2514]: I1112 20:55:35.478055 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-lib-modules\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478437 kubelet[2514]: I1112 20:55:35.478081 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-xtables-lock\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478437 kubelet[2514]: I1112 20:55:35.478102 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-policysync\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478437 kubelet[2514]: I1112 20:55:35.478120 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/59f36097-2a71-4a1b-96a5-2b7e23984756-node-certs\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478437 kubelet[2514]: I1112 20:55:35.478158 2514 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-cni-log-dir\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478437 kubelet[2514]: I1112 20:55:35.478247 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-cni-net-dir\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478671 kubelet[2514]: I1112 20:55:35.478265 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qfb\" (UniqueName: \"kubernetes.io/projected/59f36097-2a71-4a1b-96a5-2b7e23984756-kube-api-access-64qfb\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478671 kubelet[2514]: I1112 20:55:35.478366 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f557dd-e3c8-4561-ad63-16e2919af7c9-kubelet-dir\") pod \"csi-node-driver-pl6sb\" (UID: \"98f557dd-e3c8-4561-ad63-16e2919af7c9\") " pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:55:35.478671 kubelet[2514]: I1112 20:55:35.478446 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-var-run-calico\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478671 kubelet[2514]: I1112 20:55:35.478523 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98f557dd-e3c8-4561-ad63-16e2919af7c9-registration-dir\") pod \"csi-node-driver-pl6sb\" (UID: \"98f557dd-e3c8-4561-ad63-16e2919af7c9\") " pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:55:35.478671 kubelet[2514]: I1112 20:55:35.478618 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-flexvol-driver-host\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478794 kubelet[2514]: I1112 20:55:35.478635 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59f36097-2a71-4a1b-96a5-2b7e23984756-var-lib-calico\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.478794 kubelet[2514]: I1112 20:55:35.478745 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f36097-2a71-4a1b-96a5-2b7e23984756-tigera-ca-bundle\") pod \"calico-node-76j9r\" (UID: \"59f36097-2a71-4a1b-96a5-2b7e23984756\") " pod="calico-system/calico-node-76j9r" Nov 12 20:55:35.581538 kubelet[2514]: E1112 20:55:35.581485 2514 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Nov 12 20:55:35.581538 kubelet[2514]: W1112 20:55:35.581521 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.581738 kubelet[2514]: E1112 20:55:35.581558 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.582123 kubelet[2514]: E1112 20:55:35.582077 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.582362 kubelet[2514]: W1112 20:55:35.582154 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.582362 kubelet[2514]: E1112 20:55:35.582218 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.582692 kubelet[2514]: E1112 20:55:35.582635 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.582692 kubelet[2514]: W1112 20:55:35.582655 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.582692 kubelet[2514]: E1112 20:55:35.582678 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.583113 kubelet[2514]: E1112 20:55:35.583020 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.583113 kubelet[2514]: W1112 20:55:35.583045 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.583206 kubelet[2514]: E1112 20:55:35.583107 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.583420 kubelet[2514]: E1112 20:55:35.583406 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.583420 kubelet[2514]: W1112 20:55:35.583417 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.585988 kubelet[2514]: E1112 20:55:35.583527 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:35.585988 kubelet[2514]: E1112 20:55:35.583933 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.585988 kubelet[2514]: W1112 20:55:35.583942 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.585988 kubelet[2514]: E1112 20:55:35.583972 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.585988 kubelet[2514]: E1112 20:55:35.584157 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.585988 kubelet[2514]: W1112 20:55:35.584175 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.585988 kubelet[2514]: E1112 20:55:35.584202 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.592683 kubelet[2514]: E1112 20:55:35.592568 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.592683 kubelet[2514]: W1112 20:55:35.592597 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.592683 kubelet[2514]: E1112 20:55:35.592626 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.593559 kubelet[2514]: E1112 20:55:35.593537 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.593559 kubelet[2514]: W1112 20:55:35.593553 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.593650 kubelet[2514]: E1112 20:55:35.593565 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:35.599194 kubelet[2514]: E1112 20:55:35.599088 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:35.599194 kubelet[2514]: W1112 20:55:35.599115 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:35.599194 kubelet[2514]: E1112 20:55:35.599137 2514 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:35.696324 kubelet[2514]: E1112 20:55:35.696027 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:35.699812 kubelet[2514]: E1112 20:55:35.699769 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:35.707552 containerd[1468]: time="2024-11-12T20:55:35.707512940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76j9r,Uid:59f36097-2a71-4a1b-96a5-2b7e23984756,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:35.708042 containerd[1468]: time="2024-11-12T20:55:35.707512349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775b47f464-npfcg,Uid:cec504af-1556-4ca1-98cb-132b96c2dd6d,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:36.400170 containerd[1468]: time="2024-11-12T20:55:36.399921098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:36.400170 containerd[1468]: time="2024-11-12T20:55:36.400153134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:36.400355 containerd[1468]: time="2024-11-12T20:55:36.400209981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:36.400494 containerd[1468]: time="2024-11-12T20:55:36.400395420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:36.402364 containerd[1468]: time="2024-11-12T20:55:36.401102880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:36.402364 containerd[1468]: time="2024-11-12T20:55:36.401793198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:36.402364 containerd[1468]: time="2024-11-12T20:55:36.401806323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:36.402364 containerd[1468]: time="2024-11-12T20:55:36.401926769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:36.422031 systemd[1]: Started cri-containerd-f257bf07f95c8b4421165b2ba2a40fe730646d53e0a21c9b3ea120b82ff615bf.scope - libcontainer container f257bf07f95c8b4421165b2ba2a40fe730646d53e0a21c9b3ea120b82ff615bf. Nov 12 20:55:36.425514 systemd[1]: Started cri-containerd-bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c.scope - libcontainer container bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c. 
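
The repeated driver-call.go failures above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the FlexVolume "init" call before Calico has installed that binary: the exec fails, the call produces no output, and unmarshalling an empty string as JSON yields exactly "unexpected end of JSON input". A stripped-down sketch of that probe; the JSON status shape is the standard FlexVolume answer, and the error-wrapping details are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is the minimal JSON a FlexVolume driver prints on stdout,
    // e.g. {"status":"Success"}.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func initDriver(path string) (*driverStatus, error) {
        out, err := exec.Command(path, "init").CombinedOutput()
        if err != nil {
            // a missing driver binary fails here; the kubelet's exec wrapper
            // reports it as "executable file not found", as in the log
            return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // empty output decodes to "unexpected end of JSON input"
            return nil, err
        }
        return &st, nil
    }

    func main() {
        _, err := initDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }

The flexvol-driver init container started a few entries below is what eventually drops the uds binary into that directory, which is why the probe errors stop recurring later in the log.
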
Nov 12 20:55:36.460706 containerd[1468]: time="2024-11-12T20:55:36.460227688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76j9r,Uid:59f36097-2a71-4a1b-96a5-2b7e23984756,Namespace:calico-system,Attempt:0,} returns sandbox id \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\"" Nov 12 20:55:36.461510 kubelet[2514]: E1112 20:55:36.461472 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:36.465043 containerd[1468]: time="2024-11-12T20:55:36.464980845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:55:36.470993 containerd[1468]: time="2024-11-12T20:55:36.470703224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775b47f464-npfcg,Uid:cec504af-1556-4ca1-98cb-132b96c2dd6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f257bf07f95c8b4421165b2ba2a40fe730646d53e0a21c9b3ea120b82ff615bf\"" Nov 12 20:55:36.472084 kubelet[2514]: E1112 20:55:36.471819 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:36.957847 kubelet[2514]: E1112 20:55:36.956323 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:38.956434 kubelet[2514]: E1112 20:55:38.956331 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:39.869767 containerd[1468]: time="2024-11-12T20:55:39.869680230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:39.870580 containerd[1468]: time="2024-11-12T20:55:39.870530860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:55:39.871835 containerd[1468]: time="2024-11-12T20:55:39.871793573Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:39.874504 containerd[1468]: time="2024-11-12T20:55:39.874434166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:39.875000 containerd[1468]: time="2024-11-12T20:55:39.874953051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 3.409901202s" Nov 12 20:55:39.875067 
containerd[1468]: time="2024-11-12T20:55:39.875002424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:55:39.876041 containerd[1468]: time="2024-11-12T20:55:39.876012683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:55:39.877437 containerd[1468]: time="2024-11-12T20:55:39.877397636Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:55:39.899119 containerd[1468]: time="2024-11-12T20:55:39.899038703Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423\"" Nov 12 20:55:39.900032 containerd[1468]: time="2024-11-12T20:55:39.899977397Z" level=info msg="StartContainer for \"55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423\"" Nov 12 20:55:39.935104 systemd[1]: Started cri-containerd-55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423.scope - libcontainer container 55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423. Nov 12 20:55:39.976334 containerd[1468]: time="2024-11-12T20:55:39.976201139Z" level=info msg="StartContainer for \"55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423\" returns successfully" Nov 12 20:55:39.987949 systemd[1]: cri-containerd-55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423.scope: Deactivated successfully. Nov 12 20:55:40.009774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423-rootfs.mount: Deactivated successfully. 
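
The flexvol-driver container above runs once and exits: its cri-containerd-....scope deactivates, the rootfs mount is torn down, and, in the entries that follow, the runtime shim disconnects and is cleaned up. A sketch of observing such an exit with the containerd Go client; the client library, socket path, and "k8s.io" namespace (where CRI-managed containers live) are assumptions, while the container id is the one from the log:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed containers are kept in the "k8s.io" namespace
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        container, err := client.LoadContainer(ctx,
            "55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423")
        if err != nil {
            panic(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            panic(err)
        }

        statusC, err := task.Wait(ctx) // resolves when the task exits and the shim reports status
        if err != nil {
            panic(err)
        }
        status := <-statusC
        code, _, err := status.Result()
        if err != nil {
            panic(err)
        }
        fmt.Println("exit code:", code)
    }
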
Nov 12 20:55:40.017377 kubelet[2514]: E1112 20:55:40.017335 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:40.346689 containerd[1468]: time="2024-11-12T20:55:40.344263575Z" level=info msg="shim disconnected" id=55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423 namespace=k8s.io Nov 12 20:55:40.346689 containerd[1468]: time="2024-11-12T20:55:40.346672923Z" level=warning msg="cleaning up after shim disconnected" id=55b15b2bc38e8cf747f2c1deb83c0f104e59bcfa733e8689b0da403dc47e7423 namespace=k8s.io Nov 12 20:55:40.346689 containerd[1468]: time="2024-11-12T20:55:40.346684575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:55:40.956330 kubelet[2514]: E1112 20:55:40.956225 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:41.020158 kubelet[2514]: E1112 20:55:41.020116 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:42.956879 kubelet[2514]: E1112 20:55:42.956807 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:44.682598 containerd[1468]: time="2024-11-12T20:55:44.682531655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:44.712299 containerd[1468]: time="2024-11-12T20:55:44.712223781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:55:44.789481 containerd[1468]: time="2024-11-12T20:55:44.789415001Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:44.928185 containerd[1468]: time="2024-11-12T20:55:44.928103344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:44.929042 containerd[1468]: time="2024-11-12T20:55:44.929011009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 5.052969402s" Nov 12 20:55:44.929123 containerd[1468]: time="2024-11-12T20:55:44.929044312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:55:44.930152 containerd[1468]: time="2024-11-12T20:55:44.930128288Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:55:44.938281 containerd[1468]: time="2024-11-12T20:55:44.938158395Z" level=info msg="CreateContainer within sandbox \"f257bf07f95c8b4421165b2ba2a40fe730646d53e0a21c9b3ea120b82ff615bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:55:44.956432 kubelet[2514]: E1112 20:55:44.956376 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:46.956514 kubelet[2514]: E1112 20:55:46.956435 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:47.425893 containerd[1468]: time="2024-11-12T20:55:47.425781501Z" level=info msg="CreateContainer within sandbox \"f257bf07f95c8b4421165b2ba2a40fe730646d53e0a21c9b3ea120b82ff615bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92b44a8abb30a45d5792cf7ced9a7c2b1e55559ea5ef51e1e194ad1f7b3c10f9\"" Nov 12 20:55:47.426651 containerd[1468]: time="2024-11-12T20:55:47.426604007Z" level=info msg="StartContainer for \"92b44a8abb30a45d5792cf7ced9a7c2b1e55559ea5ef51e1e194ad1f7b3c10f9\"" Nov 12 20:55:47.469119 systemd[1]: Started cri-containerd-92b44a8abb30a45d5792cf7ced9a7c2b1e55559ea5ef51e1e194ad1f7b3c10f9.scope - libcontainer container 92b44a8abb30a45d5792cf7ced9a7c2b1e55559ea5ef51e1e194ad1f7b3c10f9. 
Nov 12 20:55:47.875581 containerd[1468]: time="2024-11-12T20:55:47.875504045Z" level=info msg="StartContainer for \"92b44a8abb30a45d5792cf7ced9a7c2b1e55559ea5ef51e1e194ad1f7b3c10f9\" returns successfully" Nov 12 20:55:48.033361 kubelet[2514]: E1112 20:55:48.033162 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:48.156680 kubelet[2514]: I1112 20:55:48.155984 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-775b47f464-npfcg" podStartSLOduration=4.699345056 podStartE2EDuration="13.155963925s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:55:36.473341184 +0000 UTC m=+21.619971602" lastFinishedPulling="2024-11-12 20:55:44.929960043 +0000 UTC m=+30.076590471" observedRunningTime="2024-11-12 20:55:48.155605531 +0000 UTC m=+33.302235949" watchObservedRunningTime="2024-11-12 20:55:48.155963925 +0000 UTC m=+33.302594363" Nov 12 20:55:48.956788 kubelet[2514]: E1112 20:55:48.956741 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:49.034898 kubelet[2514]: I1112 20:55:49.034827 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:49.035422 kubelet[2514]: E1112 20:55:49.035207 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:49.242785 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:54330.service - OpenSSH per-connection server daemon (10.0.0.1:54330). Nov 12 20:55:49.300503 sshd[3117]: Accepted publickey for core from 10.0.0.1 port 54330 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:49.302749 sshd[3117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:49.308492 systemd-logind[1450]: New session 8 of user core. Nov 12 20:55:49.321051 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:55:49.464043 sshd[3117]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:49.470968 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:55:49.470971 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:54330.service: Deactivated successfully. Nov 12 20:55:49.474299 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:55:49.475578 systemd-logind[1450]: Removed session 8. 
Nov 12 20:55:50.958968 kubelet[2514]: E1112 20:55:50.958890 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:52.957184 kubelet[2514]: E1112 20:55:52.957043 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:53.755580 kubelet[2514]: I1112 20:55:53.755523 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:53.756030 kubelet[2514]: E1112 20:55:53.755976 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:54.044091 kubelet[2514]: E1112 20:55:54.043967 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:54.478138 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:54332.service - OpenSSH per-connection server daemon (10.0.0.1:54332). Nov 12 20:55:54.517582 sshd[3134]: Accepted publickey for core from 10.0.0.1 port 54332 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:54.519788 sshd[3134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:54.524290 systemd-logind[1450]: New session 9 of user core. Nov 12 20:55:54.531081 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:55:54.684897 sshd[3134]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:54.688063 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:54332.service: Deactivated successfully. Nov 12 20:55:54.690319 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:55:54.690667 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:55:54.692288 systemd-logind[1450]: Removed session 9. 
Nov 12 20:55:54.961259 kubelet[2514]: E1112 20:55:54.961220 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:56.906905 containerd[1468]: time="2024-11-12T20:55:56.906752649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:56.909603 containerd[1468]: time="2024-11-12T20:55:56.909495909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:55:56.912302 containerd[1468]: time="2024-11-12T20:55:56.912225806Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:56.915433 containerd[1468]: time="2024-11-12T20:55:56.915340054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:56.916159 containerd[1468]: time="2024-11-12T20:55:56.916076998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 11.985921057s" Nov 12 20:55:56.916159 containerd[1468]: time="2024-11-12T20:55:56.916121962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:55:56.918480 containerd[1468]: time="2024-11-12T20:55:56.918445755Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:55:56.956806 kubelet[2514]: E1112 20:55:56.956742 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:57.054849 containerd[1468]: time="2024-11-12T20:55:57.054778218Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919\"" Nov 12 20:55:57.055395 containerd[1468]: time="2024-11-12T20:55:57.055359238Z" level=info msg="StartContainer for \"b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919\"" Nov 12 20:55:57.093087 systemd[1]: Started cri-containerd-b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919.scope - libcontainer container b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919. 
Nov 12 20:55:58.266190 containerd[1468]: time="2024-11-12T20:55:58.266111610Z" level=info msg="StartContainer for \"b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919\" returns successfully" Nov 12 20:55:58.957114 kubelet[2514]: E1112 20:55:58.956378 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:55:59.271206 kubelet[2514]: E1112 20:55:59.271165 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:59.709311 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:53934.service - OpenSSH per-connection server daemon (10.0.0.1:53934). Nov 12 20:55:59.903887 sshd[3197]: Accepted publickey for core from 10.0.0.1 port 53934 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:59.906089 sshd[3197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:59.910858 systemd-logind[1450]: New session 10 of user core. Nov 12 20:55:59.918031 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:55:59.967308 systemd[1]: cri-containerd-b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919.scope: Deactivated successfully. Nov 12 20:55:59.992970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919-rootfs.mount: Deactivated successfully. Nov 12 20:56:00.026838 kubelet[2514]: I1112 20:56:00.026795 2514 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:56:00.103718 sshd[3197]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:00.108771 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:53934.service: Deactivated successfully. Nov 12 20:56:00.110803 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:56:00.111512 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:56:00.112480 systemd-logind[1450]: Removed session 10. Nov 12 20:56:00.154450 containerd[1468]: time="2024-11-12T20:56:00.154388015Z" level=info msg="shim disconnected" id=b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919 namespace=k8s.io Nov 12 20:56:00.154450 containerd[1468]: time="2024-11-12T20:56:00.154446408Z" level=warning msg="cleaning up after shim disconnected" id=b9c1c5454d5c875b160cffa16bd683a2cf490bfcd4d7a7c3a972f347c5fae919 namespace=k8s.io Nov 12 20:56:00.154450 containerd[1468]: time="2024-11-12T20:56:00.154456728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:00.197238 systemd[1]: Created slice kubepods-burstable-pod8786a82f_424d_47e5_a4b8_3f707927ec39.slice - libcontainer container kubepods-burstable-pod8786a82f_424d_47e5_a4b8_3f707927ec39.slice. Nov 12 20:56:00.207087 systemd[1]: Created slice kubepods-besteffort-pod8e40773a_418c_4130_9a71_ae6e65ef1939.slice - libcontainer container kubepods-besteffort-pod8e40773a_418c_4130_9a71_ae6e65ef1939.slice. Nov 12 20:56:00.215620 systemd[1]: Created slice kubepods-besteffort-pod4394043a_88b4_49ad_a98e_6481c4c4b819.slice - libcontainer container kubepods-besteffort-pod4394043a_88b4_49ad_a98e_6481c4c4b819.slice. 
Nov 12 20:56:00.223090 systemd[1]: Created slice kubepods-burstable-podb0285c38_1cc8_4609_bd2b_a0cdab6d4401.slice - libcontainer container kubepods-burstable-podb0285c38_1cc8_4609_bd2b_a0cdab6d4401.slice. Nov 12 20:56:00.232801 systemd[1]: Created slice kubepods-besteffort-podb8c09238_5c26_48aa_9e7a_e74214863f5a.slice - libcontainer container kubepods-besteffort-podb8c09238_5c26_48aa_9e7a_e74214863f5a.slice. Nov 12 20:56:00.242945 kubelet[2514]: I1112 20:56:00.242876 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kr2\" (UniqueName: \"kubernetes.io/projected/8e40773a-418c-4130-9a71-ae6e65ef1939-kube-api-access-b8kr2\") pod \"calico-apiserver-5bb9f949c4-t9xw4\" (UID: \"8e40773a-418c-4130-9a71-ae6e65ef1939\") " pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" Nov 12 20:56:00.243259 kubelet[2514]: I1112 20:56:00.243183 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8786a82f-424d-47e5-a4b8-3f707927ec39-config-volume\") pod \"coredns-6f6b679f8f-hgqb9\" (UID: \"8786a82f-424d-47e5-a4b8-3f707927ec39\") " pod="kube-system/coredns-6f6b679f8f-hgqb9" Nov 12 20:56:00.243259 kubelet[2514]: I1112 20:56:00.243219 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4394043a-88b4-49ad-a98e-6481c4c4b819-calico-apiserver-certs\") pod \"calico-apiserver-5bb9f949c4-wkcll\" (UID: \"4394043a-88b4-49ad-a98e-6481c4c4b819\") " pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" Nov 12 20:56:00.243259 kubelet[2514]: I1112 20:56:00.243244 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8c09238-5c26-48aa-9e7a-e74214863f5a-tigera-ca-bundle\") pod \"calico-kube-controllers-7795f444d4-qgkp8\" (UID: \"b8c09238-5c26-48aa-9e7a-e74214863f5a\") " pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" Nov 12 20:56:00.243527 kubelet[2514]: I1112 20:56:00.243266 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27qh\" (UniqueName: \"kubernetes.io/projected/8786a82f-424d-47e5-a4b8-3f707927ec39-kube-api-access-t27qh\") pod \"coredns-6f6b679f8f-hgqb9\" (UID: \"8786a82f-424d-47e5-a4b8-3f707927ec39\") " pod="kube-system/coredns-6f6b679f8f-hgqb9" Nov 12 20:56:00.243527 kubelet[2514]: I1112 20:56:00.243290 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0285c38-1cc8-4609-bd2b-a0cdab6d4401-config-volume\") pod \"coredns-6f6b679f8f-z9sfr\" (UID: \"b0285c38-1cc8-4609-bd2b-a0cdab6d4401\") " pod="kube-system/coredns-6f6b679f8f-z9sfr" Nov 12 20:56:00.243527 kubelet[2514]: I1112 20:56:00.243343 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e40773a-418c-4130-9a71-ae6e65ef1939-calico-apiserver-certs\") pod \"calico-apiserver-5bb9f949c4-t9xw4\" (UID: \"8e40773a-418c-4130-9a71-ae6e65ef1939\") " pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" Nov 12 20:56:00.243527 kubelet[2514]: I1112 20:56:00.243390 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2ps\" 
(UniqueName: \"kubernetes.io/projected/b0285c38-1cc8-4609-bd2b-a0cdab6d4401-kube-api-access-bg2ps\") pod \"coredns-6f6b679f8f-z9sfr\" (UID: \"b0285c38-1cc8-4609-bd2b-a0cdab6d4401\") " pod="kube-system/coredns-6f6b679f8f-z9sfr" Nov 12 20:56:00.243527 kubelet[2514]: I1112 20:56:00.243417 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcbl\" (UniqueName: \"kubernetes.io/projected/b8c09238-5c26-48aa-9e7a-e74214863f5a-kube-api-access-swcbl\") pod \"calico-kube-controllers-7795f444d4-qgkp8\" (UID: \"b8c09238-5c26-48aa-9e7a-e74214863f5a\") " pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" Nov 12 20:56:00.243752 kubelet[2514]: I1112 20:56:00.243451 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7d7m\" (UniqueName: \"kubernetes.io/projected/4394043a-88b4-49ad-a98e-6481c4c4b819-kube-api-access-n7d7m\") pod \"calico-apiserver-5bb9f949c4-wkcll\" (UID: \"4394043a-88b4-49ad-a98e-6481c4c4b819\") " pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" Nov 12 20:56:00.275443 kubelet[2514]: E1112 20:56:00.275376 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.276673 containerd[1468]: time="2024-11-12T20:56:00.276562153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:56:00.503176 kubelet[2514]: E1112 20:56:00.503005 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.504239 containerd[1468]: time="2024-11-12T20:56:00.504196224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hgqb9,Uid:8786a82f-424d-47e5-a4b8-3f707927ec39,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:00.511998 containerd[1468]: time="2024-11-12T20:56:00.511931750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-t9xw4,Uid:8e40773a-418c-4130-9a71-ae6e65ef1939,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:00.520991 containerd[1468]: time="2024-11-12T20:56:00.520943232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-wkcll,Uid:4394043a-88b4-49ad-a98e-6481c4c4b819,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:00.529299 kubelet[2514]: E1112 20:56:00.529251 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.530100 containerd[1468]: time="2024-11-12T20:56:00.529848128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z9sfr,Uid:b0285c38-1cc8-4609-bd2b-a0cdab6d4401,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:00.536823 containerd[1468]: time="2024-11-12T20:56:00.536798396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795f444d4-qgkp8,Uid:b8c09238-5c26-48aa-9e7a-e74214863f5a,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:00.687017 containerd[1468]: time="2024-11-12T20:56:00.686375414Z" level=error msg="Failed to destroy network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Nov 12 20:56:00.688847 containerd[1468]: time="2024-11-12T20:56:00.688500483Z" level=error msg="encountered an error cleaning up failed sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.688847 containerd[1468]: time="2024-11-12T20:56:00.688558535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z9sfr,Uid:b0285c38-1cc8-4609-bd2b-a0cdab6d4401,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.703280 containerd[1468]: time="2024-11-12T20:56:00.703155686Z" level=error msg="Failed to destroy network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.703473 kubelet[2514]: E1112 20:56:00.703420 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.703593 kubelet[2514]: E1112 20:56:00.703534 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-z9sfr" Nov 12 20:56:00.703593 kubelet[2514]: E1112 20:56:00.703566 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-z9sfr" Nov 12 20:56:00.703728 containerd[1468]: time="2024-11-12T20:56:00.703564728Z" level=error msg="encountered an error cleaning up failed sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.703728 containerd[1468]: time="2024-11-12T20:56:00.703613221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-wkcll,Uid:4394043a-88b4-49ad-a98e-6481c4c4b819,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.703945 kubelet[2514]: E1112 20:56:00.703623 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-z9sfr_kube-system(b0285c38-1cc8-4609-bd2b-a0cdab6d4401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-z9sfr_kube-system(b0285c38-1cc8-4609-bd2b-a0cdab6d4401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z9sfr" podUID="b0285c38-1cc8-4609-bd2b-a0cdab6d4401" Nov 12 20:56:00.704083 kubelet[2514]: E1112 20:56:00.703953 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.704083 kubelet[2514]: E1112 20:56:00.703998 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" Nov 12 20:56:00.704083 kubelet[2514]: E1112 20:56:00.704021 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" Nov 12 20:56:00.704244 kubelet[2514]: E1112 20:56:00.704056 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb9f949c4-wkcll_calico-apiserver(4394043a-88b4-49ad-a98e-6481c4c4b819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb9f949c4-wkcll_calico-apiserver(4394043a-88b4-49ad-a98e-6481c4c4b819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" podUID="4394043a-88b4-49ad-a98e-6481c4c4b819" Nov 12 20:56:00.711500 containerd[1468]: time="2024-11-12T20:56:00.711437919Z" level=error msg="Failed to destroy network for sandbox 
\"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.712714 containerd[1468]: time="2024-11-12T20:56:00.712353169Z" level=error msg="encountered an error cleaning up failed sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.712714 containerd[1468]: time="2024-11-12T20:56:00.712432832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-t9xw4,Uid:8e40773a-418c-4130-9a71-ae6e65ef1939,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.713164 kubelet[2514]: E1112 20:56:00.712927 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.713164 kubelet[2514]: E1112 20:56:00.713039 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" Nov 12 20:56:00.713164 kubelet[2514]: E1112 20:56:00.713087 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" Nov 12 20:56:00.713447 kubelet[2514]: E1112 20:56:00.713368 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb9f949c4-t9xw4_calico-apiserver(8e40773a-418c-4130-9a71-ae6e65ef1939)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb9f949c4-t9xw4_calico-apiserver(8e40773a-418c-4130-9a71-ae6e65ef1939)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" podUID="8e40773a-418c-4130-9a71-ae6e65ef1939" Nov 
12 20:56:00.716124 containerd[1468]: time="2024-11-12T20:56:00.715991423Z" level=error msg="Failed to destroy network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.716516 containerd[1468]: time="2024-11-12T20:56:00.716472472Z" level=error msg="encountered an error cleaning up failed sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.716623 containerd[1468]: time="2024-11-12T20:56:00.716531788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795f444d4-qgkp8,Uid:b8c09238-5c26-48aa-9e7a-e74214863f5a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.717446 kubelet[2514]: E1112 20:56:00.717005 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.717446 kubelet[2514]: E1112 20:56:00.717083 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" Nov 12 20:56:00.717446 kubelet[2514]: E1112 20:56:00.717111 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" Nov 12 20:56:00.717589 kubelet[2514]: E1112 20:56:00.717157 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7795f444d4-qgkp8_calico-system(b8c09238-5c26-48aa-9e7a-e74214863f5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7795f444d4-qgkp8_calico-system(b8c09238-5c26-48aa-9e7a-e74214863f5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" podUID="b8c09238-5c26-48aa-9e7a-e74214863f5a" Nov 12 20:56:00.717919 containerd[1468]: time="2024-11-12T20:56:00.717838805Z" level=error msg="Failed to destroy network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.718578 containerd[1468]: time="2024-11-12T20:56:00.718452020Z" level=error msg="encountered an error cleaning up failed sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.718578 containerd[1468]: time="2024-11-12T20:56:00.718522366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hgqb9,Uid:8786a82f-424d-47e5-a4b8-3f707927ec39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.718938 kubelet[2514]: E1112 20:56:00.718888 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:00.719006 kubelet[2514]: E1112 20:56:00.718962 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hgqb9" Nov 12 20:56:00.719006 kubelet[2514]: E1112 20:56:00.718985 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hgqb9" Nov 12 20:56:00.719077 kubelet[2514]: E1112 20:56:00.719032 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hgqb9_kube-system(8786a82f-424d-47e5-a4b8-3f707927ec39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hgqb9_kube-system(8786a82f-424d-47e5-a4b8-3f707927ec39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hgqb9" podUID="8786a82f-424d-47e5-a4b8-3f707927ec39" Nov 12 20:56:00.963387 systemd[1]: Created slice kubepods-besteffort-pod98f557dd_e3c8_4561_ad63_16e2919af7c9.slice - libcontainer container kubepods-besteffort-pod98f557dd_e3c8_4561_ad63_16e2919af7c9.slice. Nov 12 20:56:00.965666 containerd[1468]: time="2024-11-12T20:56:00.965610268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl6sb,Uid:98f557dd-e3c8-4561-ad63-16e2919af7c9,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:01.277296 kubelet[2514]: I1112 20:56:01.277171 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:01.277988 containerd[1468]: time="2024-11-12T20:56:01.277951634Z" level=info msg="StopPodSandbox for \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\"" Nov 12 20:56:01.278287 containerd[1468]: time="2024-11-12T20:56:01.278176178Z" level=info msg="Ensure that sandbox 7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6 in task-service has been cleanup successfully" Nov 12 20:56:01.279027 kubelet[2514]: I1112 20:56:01.279001 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:01.279463 containerd[1468]: time="2024-11-12T20:56:01.279437886Z" level=info msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" Nov 12 20:56:01.279640 containerd[1468]: time="2024-11-12T20:56:01.279608314Z" level=info msg="Ensure that sandbox d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe in task-service has been cleanup successfully" Nov 12 20:56:01.280563 kubelet[2514]: I1112 20:56:01.280542 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:01.281265 containerd[1468]: time="2024-11-12T20:56:01.280947952Z" level=info msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" Nov 12 20:56:01.281265 containerd[1468]: time="2024-11-12T20:56:01.281077853Z" level=info msg="Ensure that sandbox c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b in task-service has been cleanup successfully" Nov 12 20:56:01.281873 kubelet[2514]: I1112 20:56:01.281811 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:01.283678 containerd[1468]: time="2024-11-12T20:56:01.283625806Z" level=info msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" Nov 12 20:56:01.283896 containerd[1468]: time="2024-11-12T20:56:01.283857162Z" level=info msg="Ensure that sandbox c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33 in task-service has been cleanup successfully" Nov 12 20:56:01.285237 kubelet[2514]: I1112 20:56:01.285211 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:01.285765 containerd[1468]: time="2024-11-12T20:56:01.285736974Z" level=info msg="StopPodSandbox for \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\"" Nov 12 
20:56:01.285991 containerd[1468]: time="2024-11-12T20:56:01.285965496Z" level=info msg="Ensure that sandbox 20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55 in task-service has been cleanup successfully" Nov 12 20:56:01.315138 containerd[1468]: time="2024-11-12T20:56:01.315009183Z" level=error msg="StopPodSandbox for \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\" failed" error="failed to destroy network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.315412 kubelet[2514]: E1112 20:56:01.315295 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:01.315412 kubelet[2514]: E1112 20:56:01.315367 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6"} Nov 12 20:56:01.315517 kubelet[2514]: E1112 20:56:01.315433 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8c09238-5c26-48aa-9e7a-e74214863f5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:01.315517 kubelet[2514]: E1112 20:56:01.315456 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8c09238-5c26-48aa-9e7a-e74214863f5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" podUID="b8c09238-5c26-48aa-9e7a-e74214863f5a" Nov 12 20:56:01.331923 containerd[1468]: time="2024-11-12T20:56:01.331017146Z" level=error msg="StopPodSandbox for \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\" failed" error="failed to destroy network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.332085 kubelet[2514]: E1112 20:56:01.331258 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:01.332085 kubelet[2514]: E1112 20:56:01.331384 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55"} Nov 12 20:56:01.332085 kubelet[2514]: E1112 20:56:01.331496 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8786a82f-424d-47e5-a4b8-3f707927ec39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:01.332085 kubelet[2514]: E1112 20:56:01.331533 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8786a82f-424d-47e5-a4b8-3f707927ec39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hgqb9" podUID="8786a82f-424d-47e5-a4b8-3f707927ec39" Nov 12 20:56:01.334957 containerd[1468]: time="2024-11-12T20:56:01.334604847Z" level=error msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" failed" error="failed to destroy network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.335073 kubelet[2514]: E1112 20:56:01.334982 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:01.335163 kubelet[2514]: E1112 20:56:01.335031 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe"} Nov 12 20:56:01.335217 kubelet[2514]: E1112 20:56:01.335176 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0285c38-1cc8-4609-bd2b-a0cdab6d4401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:01.335299 kubelet[2514]: E1112 20:56:01.335251 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b0285c38-1cc8-4609-bd2b-a0cdab6d4401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z9sfr" podUID="b0285c38-1cc8-4609-bd2b-a0cdab6d4401" Nov 12 20:56:01.340967 containerd[1468]: time="2024-11-12T20:56:01.340910448Z" level=error msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" failed" error="failed to destroy network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.341202 containerd[1468]: time="2024-11-12T20:56:01.341170530Z" level=error msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" failed" error="failed to destroy network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.341262 kubelet[2514]: E1112 20:56:01.341176 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:01.341262 kubelet[2514]: E1112 20:56:01.341242 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b"} Nov 12 20:56:01.341339 kubelet[2514]: E1112 20:56:01.341277 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4394043a-88b4-49ad-a98e-6481c4c4b819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:01.341339 kubelet[2514]: E1112 20:56:01.341300 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4394043a-88b4-49ad-a98e-6481c4c4b819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" podUID="4394043a-88b4-49ad-a98e-6481c4c4b819" Nov 12 20:56:01.341534 kubelet[2514]: E1112 20:56:01.341502 2514 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:01.341610 kubelet[2514]: E1112 20:56:01.341538 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33"} Nov 12 20:56:01.341610 kubelet[2514]: E1112 20:56:01.341568 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e40773a-418c-4130-9a71-ae6e65ef1939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:01.341610 kubelet[2514]: E1112 20:56:01.341593 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e40773a-418c-4130-9a71-ae6e65ef1939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" podUID="8e40773a-418c-4130-9a71-ae6e65ef1939" Nov 12 20:56:01.799731 containerd[1468]: time="2024-11-12T20:56:01.799642065Z" level=error msg="Failed to destroy network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.800191 containerd[1468]: time="2024-11-12T20:56:01.800156519Z" level=error msg="encountered an error cleaning up failed sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.800262 containerd[1468]: time="2024-11-12T20:56:01.800221825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl6sb,Uid:98f557dd-e3c8-4561-ad63-16e2919af7c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.800535 kubelet[2514]: E1112 20:56:01.800485 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:01.800610 kubelet[2514]: E1112 20:56:01.800561 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:56:01.800610 kubelet[2514]: E1112 20:56:01.800586 2514 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pl6sb" Nov 12 20:56:01.800671 kubelet[2514]: E1112 20:56:01.800630 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pl6sb_calico-system(98f557dd-e3c8-4561-ad63-16e2919af7c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pl6sb_calico-system(98f557dd-e3c8-4561-ad63-16e2919af7c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:56:01.802462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27-shm.mount: Deactivated successfully. 
Nov 12 20:56:02.294599 kubelet[2514]: I1112 20:56:02.294559 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:02.296538 containerd[1468]: time="2024-11-12T20:56:02.295670614Z" level=info msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" Nov 12 20:56:02.296538 containerd[1468]: time="2024-11-12T20:56:02.295923341Z" level=info msg="Ensure that sandbox 00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27 in task-service has been cleanup successfully" Nov 12 20:56:02.369102 containerd[1468]: time="2024-11-12T20:56:02.369021251Z" level=error msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" failed" error="failed to destroy network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:02.369373 kubelet[2514]: E1112 20:56:02.369329 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:02.369428 kubelet[2514]: E1112 20:56:02.369393 2514 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27"} Nov 12 20:56:02.369454 kubelet[2514]: E1112 20:56:02.369428 2514 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98f557dd-e3c8-4561-ad63-16e2919af7c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:02.369532 kubelet[2514]: E1112 20:56:02.369451 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98f557dd-e3c8-4561-ad63-16e2919af7c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pl6sb" podUID="98f557dd-e3c8-4561-ad63-16e2919af7c9" Nov 12 20:56:05.115685 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:53944.service - OpenSSH per-connection server daemon (10.0.0.1:53944). Nov 12 20:56:05.168967 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 53944 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:05.171464 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:05.176688 systemd-logind[1450]: New session 11 of user core. 
Nov 12 20:56:05.183105 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:56:05.339654 sshd[3606]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:05.345817 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:56:05.346588 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:53944.service: Deactivated successfully. Nov 12 20:56:05.349738 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:56:05.351042 systemd-logind[1450]: Removed session 11. Nov 12 20:56:08.654163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831681344.mount: Deactivated successfully. Nov 12 20:56:09.904117 containerd[1468]: time="2024-11-12T20:56:09.904022858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:09.906316 containerd[1468]: time="2024-11-12T20:56:09.906245167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:56:09.909442 containerd[1468]: time="2024-11-12T20:56:09.909384377Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:09.924236 containerd[1468]: time="2024-11-12T20:56:09.924171561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:09.924845 containerd[1468]: time="2024-11-12T20:56:09.924805800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 9.648149705s" Nov 12 20:56:09.924942 containerd[1468]: time="2024-11-12T20:56:09.924844735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:56:09.933973 containerd[1468]: time="2024-11-12T20:56:09.933926870Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:56:09.991484 containerd[1468]: time="2024-11-12T20:56:09.991414947Z" level=info msg="CreateContainer within sandbox \"bace18c734ba1b9db1cc34950afd10df157f329a4e0ddf7e4ec247896b89ca3c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f86f67271e34fd3b5dcd8281d1df7b006dc350af0b6e0ede0fc243cb453cf09\"" Nov 12 20:56:09.994477 containerd[1468]: time="2024-11-12T20:56:09.992661652Z" level=info msg="StartContainer for \"0f86f67271e34fd3b5dcd8281d1df7b006dc350af0b6e0ede0fc243cb453cf09\"" Nov 12 20:56:10.072119 systemd[1]: Started cri-containerd-0f86f67271e34fd3b5dcd8281d1df7b006dc350af0b6e0ede0fc243cb453cf09.scope - libcontainer container 0f86f67271e34fd3b5dcd8281d1df7b006dc350af0b6e0ede0fc243cb453cf09. 
Nov 12 20:56:10.174470 containerd[1468]: time="2024-11-12T20:56:10.174194659Z" level=info msg="StartContainer for \"0f86f67271e34fd3b5dcd8281d1df7b006dc350af0b6e0ede0fc243cb453cf09\" returns successfully" Nov 12 20:56:10.210314 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:56:10.211174 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 20:56:10.316724 kubelet[2514]: E1112 20:56:10.316679 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:10.366467 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:45944.service - OpenSSH per-connection server daemon (10.0.0.1:45944). Nov 12 20:56:10.401126 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 45944 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:10.402950 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:10.407976 systemd-logind[1450]: New session 12 of user core. Nov 12 20:56:10.414015 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:56:10.551382 kubelet[2514]: I1112 20:56:10.551192 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-76j9r" podStartSLOduration=2.089827977 podStartE2EDuration="35.551137551s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:55:36.464455448 +0000 UTC m=+21.611085866" lastFinishedPulling="2024-11-12 20:56:09.925765022 +0000 UTC m=+55.072395440" observedRunningTime="2024-11-12 20:56:10.549927168 +0000 UTC m=+55.696557586" watchObservedRunningTime="2024-11-12 20:56:10.551137551 +0000 UTC m=+55.697767969" Nov 12 20:56:10.562792 sshd[3679]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:10.573518 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:45944.service: Deactivated successfully. Nov 12 20:56:10.575652 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:56:10.578371 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:56:10.587894 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:45956.service - OpenSSH per-connection server daemon (10.0.0.1:45956). Nov 12 20:56:10.588680 systemd-logind[1450]: Removed session 12. Nov 12 20:56:10.619333 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 45956 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:10.620911 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:10.624953 systemd-logind[1450]: New session 13 of user core. Nov 12 20:56:10.633132 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:56:10.835560 sshd[3716]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:10.849332 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:45956.service: Deactivated successfully. Nov 12 20:56:10.853476 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:56:10.855693 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:56:10.860524 systemd-logind[1450]: Removed session 13. Nov 12 20:56:10.869265 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:45966.service - OpenSSH per-connection server daemon (10.0.0.1:45966).
Nov 12 20:56:10.907366 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 45966 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:10.909944 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:10.915595 systemd-logind[1450]: New session 14 of user core. Nov 12 20:56:10.926059 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:56:11.052768 sshd[3728]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:11.057418 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:45966.service: Deactivated successfully. Nov 12 20:56:11.060013 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:56:11.060814 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:56:11.062057 systemd-logind[1450]: Removed session 14. Nov 12 20:56:11.318674 kubelet[2514]: E1112 20:56:11.318633 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:11.958142 containerd[1468]: time="2024-11-12T20:56:11.957616529Z" level=info msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" Nov 12 20:56:11.958967 containerd[1468]: time="2024-11-12T20:56:11.958930550Z" level=info msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" Nov 12 20:56:12.320507 kubelet[2514]: E1112 20:56:12.320471 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.255 [INFO][3891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.256 [INFO][3891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" iface="eth0" netns="/var/run/netns/cni-96e0bd02-b80d-62ec-925c-54aadbefa400" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.256 [INFO][3891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" iface="eth0" netns="/var/run/netns/cni-96e0bd02-b80d-62ec-925c-54aadbefa400" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.257 [INFO][3891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" iface="eth0" netns="/var/run/netns/cni-96e0bd02-b80d-62ec-925c-54aadbefa400" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.257 [INFO][3891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.257 [INFO][3891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.310 [INFO][3939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.311 [INFO][3939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.311 [INFO][3939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.541 [WARNING][3939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.541 [INFO][3939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.566 [INFO][3939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:12.573871 containerd[1468]: 2024-11-12 20:56:12.569 [INFO][3891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:12.577062 containerd[1468]: time="2024-11-12T20:56:12.577022269Z" level=info msg="TearDown network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" successfully" Nov 12 20:56:12.577106 containerd[1468]: time="2024-11-12T20:56:12.577062517Z" level=info msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" returns successfully" Nov 12 20:56:12.577234 systemd[1]: run-netns-cni\x2d96e0bd02\x2db80d\x2d62ec\x2d925c\x2d54aadbefa400.mount: Deactivated successfully. Nov 12 20:56:12.577966 containerd[1468]: time="2024-11-12T20:56:12.577928678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-t9xw4,Uid:8e40773a-418c-4130-9a71-ae6e65ef1939,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.539 [INFO][3901] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.539 [INFO][3901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" iface="eth0" netns="/var/run/netns/cni-e16e2a79-e0ed-e7ae-ce0b-b87df3434c00" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.540 [INFO][3901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" iface="eth0" netns="/var/run/netns/cni-e16e2a79-e0ed-e7ae-ce0b-b87df3434c00" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.541 [INFO][3901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" iface="eth0" netns="/var/run/netns/cni-e16e2a79-e0ed-e7ae-ce0b-b87df3434c00" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.541 [INFO][3901] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.541 [INFO][3901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.564 [INFO][3968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.564 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.566 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.633 [WARNING][3968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.633 [INFO][3968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.634 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:12.641303 containerd[1468]: 2024-11-12 20:56:12.638 [INFO][3901] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:12.641780 containerd[1468]: time="2024-11-12T20:56:12.641453081Z" level=info msg="TearDown network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" successfully" Nov 12 20:56:12.641780 containerd[1468]: time="2024-11-12T20:56:12.641479522Z" level=info msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" returns successfully" Nov 12 20:56:12.642499 kubelet[2514]: E1112 20:56:12.642190 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:12.643269 containerd[1468]: time="2024-11-12T20:56:12.643068259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z9sfr,Uid:b0285c38-1cc8-4609-bd2b-a0cdab6d4401,Namespace:kube-system,Attempt:1,}" Nov 12 20:56:12.644305 systemd[1]: run-netns-cni\x2de16e2a79\x2de0ed\x2de7ae\x2dce0b\x2db87df3434c00.mount: Deactivated successfully. Nov 12 20:56:12.658897 kernel: bpftool[3983]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:56:12.984145 systemd-networkd[1397]: vxlan.calico: Link UP Nov 12 20:56:12.984155 systemd-networkd[1397]: vxlan.calico: Gained carrier Nov 12 20:56:13.485729 systemd-networkd[1397]: cali860f8793486: Link UP Nov 12 20:56:13.486347 systemd-networkd[1397]: cali860f8793486: Gained carrier Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.257 [INFO][4028] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0 calico-apiserver-5bb9f949c4- calico-apiserver 8e40773a-418c-4130-9a71-ae6e65ef1939 947 0 2024-11-12 20:55:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb9f949c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bb9f949c4-t9xw4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali860f8793486 [] []}} ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.258 [INFO][4028] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.393 [INFO][4049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" HandleID="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.434 [INFO][4049] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" HandleID="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" 
Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bb9f949c4-t9xw4", "timestamp":"2024-11-12 20:56:13.393887836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.435 [INFO][4049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.435 [INFO][4049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.435 [INFO][4049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.440 [INFO][4049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.455 [INFO][4049] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.460 [INFO][4049] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.462 [INFO][4049] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.464 [INFO][4049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.464 [INFO][4049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.465 [INFO][4049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.469 [INFO][4049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.474 [INFO][4049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.474 [INFO][4049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" host="localhost" Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.474 [INFO][4049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:13.500401 containerd[1468]: 2024-11-12 20:56:13.474 [INFO][4049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" HandleID="k8s-pod-network.8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.480 [INFO][4028] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40773a-418c-4130-9a71-ae6e65ef1939", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bb9f949c4-t9xw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860f8793486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.480 [INFO][4028] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.480 [INFO][4028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali860f8793486 ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.486 [INFO][4028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.487 [INFO][4028] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40773a-418c-4130-9a71-ae6e65ef1939", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e", Pod:"calico-apiserver-5bb9f949c4-t9xw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860f8793486", MAC:"66:0d:45:08:e1:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:13.501340 containerd[1468]: 2024-11-12 20:56:13.496 [INFO][4028] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-t9xw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:13.733488 containerd[1468]: time="2024-11-12T20:56:13.733389083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:13.733648 containerd[1468]: time="2024-11-12T20:56:13.733463947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:13.733648 containerd[1468]: time="2024-11-12T20:56:13.733482913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:13.733648 containerd[1468]: time="2024-11-12T20:56:13.733579709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:13.734841 systemd-networkd[1397]: cali51715fbb905: Link UP Nov 12 20:56:13.735168 systemd-networkd[1397]: cali51715fbb905: Gained carrier Nov 12 20:56:13.768028 systemd[1]: Started cri-containerd-8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e.scope - libcontainer container 8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e. 
Nov 12 20:56:13.780432 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:13.805625 containerd[1468]: time="2024-11-12T20:56:13.805580038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-t9xw4,Uid:8e40773a-418c-4130-9a71-ae6e65ef1939,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e\"" Nov 12 20:56:13.807179 containerd[1468]: time="2024-11-12T20:56:13.807145548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.474 [INFO][4074] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0 coredns-6f6b679f8f- kube-system b0285c38-1cc8-4609-bd2b-a0cdab6d4401 948 0 2024-11-12 20:55:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-z9sfr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51715fbb905 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.475 [INFO][4074] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.515 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" HandleID="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.523 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" HandleID="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-z9sfr", "timestamp":"2024-11-12 20:56:13.51527627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.523 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.523 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.523 [INFO][4092] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.539 [INFO][4092] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.543 [INFO][4092] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.574 [INFO][4092] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.576 [INFO][4092] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.578 [INFO][4092] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.578 [INFO][4092] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.580 [INFO][4092] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122 Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.701 [INFO][4092] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.727 [INFO][4092] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.727 [INFO][4092] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" host="localhost" Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.727 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:13.851434 containerd[1468]: 2024-11-12 20:56:13.727 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" HandleID="k8s-pod-network.593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.731 [INFO][4074] cni-plugin/k8s.go 386: Populated endpoint ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b0285c38-1cc8-4609-bd2b-a0cdab6d4401", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-z9sfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51715fbb905", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.731 [INFO][4074] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.731 [INFO][4074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51715fbb905 ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.737 [INFO][4074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.737
[INFO][4074] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b0285c38-1cc8-4609-bd2b-a0cdab6d4401", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122", Pod:"coredns-6f6b679f8f-z9sfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51715fbb905", MAC:"16:cb:c9:c1:50:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:13.852122 containerd[1468]: 2024-11-12 20:56:13.848 [INFO][4074] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122" Namespace="kube-system" Pod="coredns-6f6b679f8f-z9sfr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:13.876740 containerd[1468]: time="2024-11-12T20:56:13.876598455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:13.876740 containerd[1468]: time="2024-11-12T20:56:13.876666354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:13.877439 containerd[1468]: time="2024-11-12T20:56:13.877382297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:13.877546 containerd[1468]: time="2024-11-12T20:56:13.877490544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:13.908151 systemd[1]: Started cri-containerd-593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122.scope - libcontainer container 593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122.
Nov 12 20:56:13.922349 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:13.950145 containerd[1468]: time="2024-11-12T20:56:13.950087987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z9sfr,Uid:b0285c38-1cc8-4609-bd2b-a0cdab6d4401,Namespace:kube-system,Attempt:1,} returns sandbox id \"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122\"" Nov 12 20:56:13.950934 kubelet[2514]: E1112 20:56:13.950853 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:13.952854 containerd[1468]: time="2024-11-12T20:56:13.952778112Z" level=info msg="CreateContainer within sandbox \"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:13.956676 containerd[1468]: time="2024-11-12T20:56:13.956626908Z" level=info msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" Nov 12 20:56:13.956750 containerd[1468]: time="2024-11-12T20:56:13.956677014Z" level=info msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" Nov 12 20:56:14.175276 containerd[1468]: time="2024-11-12T20:56:14.174687461Z" level=info msg="CreateContainer within sandbox \"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3517e574de8cedc590b48a4dc72d89fecd8b6aae5c63658d468648c64218c6c\"" Nov 12 20:56:14.176271 containerd[1468]: time="2024-11-12T20:56:14.176228042Z" level=info msg="StartContainer for \"a3517e574de8cedc590b48a4dc72d89fecd8b6aae5c63658d468648c64218c6c\"" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.162 [INFO][4240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.163 [INFO][4240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" iface="eth0" netns="/var/run/netns/cni-d47e6b6b-663b-549d-56b6-5a96af29d3d3" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.163 [INFO][4240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" iface="eth0" netns="/var/run/netns/cni-d47e6b6b-663b-549d-56b6-5a96af29d3d3" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.163 [INFO][4240] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" iface="eth0" netns="/var/run/netns/cni-d47e6b6b-663b-549d-56b6-5a96af29d3d3" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.163 [INFO][4240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.163 [INFO][4240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.193 [INFO][4256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.193 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.193 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.201 [WARNING][4256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.201 [INFO][4256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.202 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:14.208420 containerd[1468]: 2024-11-12 20:56:14.206 [INFO][4240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:14.209197 containerd[1468]: time="2024-11-12T20:56:14.209048363Z" level=info msg="TearDown network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" successfully" Nov 12 20:56:14.209197 containerd[1468]: time="2024-11-12T20:56:14.209079854Z" level=info msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" returns successfully" Nov 12 20:56:14.210018 containerd[1468]: time="2024-11-12T20:56:14.209984967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-wkcll,Uid:4394043a-88b4-49ad-a98e-6481c4c4b819,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:14.215241 systemd[1]: Started cri-containerd-a3517e574de8cedc590b48a4dc72d89fecd8b6aae5c63658d468648c64218c6c.scope - libcontainer container a3517e574de8cedc590b48a4dc72d89fecd8b6aae5c63658d468648c64218c6c. 
Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.166 [INFO][4241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.166 [INFO][4241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" iface="eth0" netns="/var/run/netns/cni-f1396840-28b1-b6d4-7ca1-8184823865eb" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.166 [INFO][4241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" iface="eth0" netns="/var/run/netns/cni-f1396840-28b1-b6d4-7ca1-8184823865eb" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.166 [INFO][4241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" iface="eth0" netns="/var/run/netns/cni-f1396840-28b1-b6d4-7ca1-8184823865eb" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.167 [INFO][4241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.167 [INFO][4241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.203 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.203 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.203 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.213 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.213 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.214 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:14.221297 containerd[1468]: 2024-11-12 20:56:14.218 [INFO][4241] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:14.221669 containerd[1468]: time="2024-11-12T20:56:14.221500443Z" level=info msg="TearDown network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" successfully" Nov 12 20:56:14.221669 containerd[1468]: time="2024-11-12T20:56:14.221530932Z" level=info msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" returns successfully" Nov 12 20:56:14.222665 containerd[1468]: time="2024-11-12T20:56:14.222443820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl6sb,Uid:98f557dd-e3c8-4561-ad63-16e2919af7c9,Namespace:calico-system,Attempt:1,}" Nov 12 20:56:14.284385 containerd[1468]: time="2024-11-12T20:56:14.284331435Z" level=info msg="StartContainer for \"a3517e574de8cedc590b48a4dc72d89fecd8b6aae5c63658d468648c64218c6c\" returns successfully" Nov 12 20:56:14.335111 kubelet[2514]: E1112 20:56:14.335059 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:14.348331 systemd[1]: run-netns-cni\x2df1396840\x2d28b1\x2db6d4\x2d7ca1\x2d8184823865eb.mount: Deactivated successfully. Nov 12 20:56:14.348447 systemd[1]: run-netns-cni\x2dd47e6b6b\x2d663b\x2d549d\x2d56b6\x2d5a96af29d3d3.mount: Deactivated successfully. Nov 12 20:56:14.359780 kubelet[2514]: I1112 20:56:14.357520 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z9sfr" podStartSLOduration=54.35749643 podStartE2EDuration="54.35749643s" podCreationTimestamp="2024-11-12 20:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:14.349194929 +0000 UTC m=+59.495825377" watchObservedRunningTime="2024-11-12 20:56:14.35749643 +0000 UTC m=+59.504126838" Nov 12 20:56:14.430307 systemd-networkd[1397]: cali02aaec37a37: Link UP Nov 12 20:56:14.431614 systemd-networkd[1397]: cali02aaec37a37: Gained carrier Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.328 [INFO][4316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pl6sb-eth0 csi-node-driver- calico-system 98f557dd-e3c8-4561-ad63-16e2919af7c9 966 0 2024-11-12 20:55:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pl6sb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali02aaec37a37 [] []}} ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.328 [INFO][4316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.379 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" HandleID="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.389 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" HandleID="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pl6sb", "timestamp":"2024-11-12 20:56:14.379755363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.390 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.391 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.391 [INFO][4340] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.394 [INFO][4340] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.398 [INFO][4340] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.403 [INFO][4340] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.405 [INFO][4340] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.407 [INFO][4340] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.408 [INFO][4340] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.409 [INFO][4340] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.417 [INFO][4340] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4340] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4340] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" host="localhost" Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:14.464643 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" HandleID="k8s-pod-network.ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.427 [INFO][4316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pl6sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98f557dd-e3c8-4561-ad63-16e2919af7c9", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pl6sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02aaec37a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.427 [INFO][4316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.427 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02aaec37a37 ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.432 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.432 [INFO][4316] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pl6sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98f557dd-e3c8-4561-ad63-16e2919af7c9", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b", Pod:"csi-node-driver-pl6sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02aaec37a37", MAC:"ee:c6:c4:53:e3:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:14.465460 containerd[1468]: 2024-11-12 20:56:14.461 [INFO][4316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b" Namespace="calico-system" Pod="csi-node-driver-pl6sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:14.491018 containerd[1468]: time="2024-11-12T20:56:14.486824274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:14.491018 containerd[1468]: time="2024-11-12T20:56:14.486909577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:14.491018 containerd[1468]: time="2024-11-12T20:56:14.486923945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.491018 containerd[1468]: time="2024-11-12T20:56:14.487014648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.522147 systemd[1]: Started cri-containerd-ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b.scope - libcontainer container ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b. 
Nov 12 20:56:14.538151 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:14.538988 systemd-networkd[1397]: cali88917f17e13: Link UP Nov 12 20:56:14.542149 systemd-networkd[1397]: cali88917f17e13: Gained carrier Nov 12 20:56:14.554554 containerd[1468]: time="2024-11-12T20:56:14.554479425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl6sb,Uid:98f557dd-e3c8-4561-ad63-16e2919af7c9,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b\"" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.317 [INFO][4303] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0 calico-apiserver-5bb9f949c4- calico-apiserver 4394043a-88b4-49ad-a98e-6481c4c4b819 965 0 2024-11-12 20:55:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb9f949c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bb9f949c4-wkcll eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88917f17e13 [] []}} ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.317 [INFO][4303] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.382 [INFO][4335] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" HandleID="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.393 [INFO][4335] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" HandleID="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bb9f949c4-wkcll", "timestamp":"2024-11-12 20:56:14.381999241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.393 [INFO][4335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.423 [INFO][4335] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.495 [INFO][4335] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.501 [INFO][4335] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.516 [INFO][4335] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.518 [INFO][4335] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.519 [INFO][4335] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.520 [INFO][4335] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.521 [INFO][4335] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.525 [INFO][4335] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.531 [INFO][4335] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.531 [INFO][4335] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" host="localhost" Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.531 [INFO][4335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:14.559185 containerd[1468]: 2024-11-12 20:56:14.531 [INFO][4335] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" HandleID="k8s-pod-network.6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.536 [INFO][4303] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4394043a-88b4-49ad-a98e-6481c4c4b819", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bb9f949c4-wkcll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88917f17e13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.536 [INFO][4303] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.536 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88917f17e13 ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.542 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.542 [INFO][4303] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4394043a-88b4-49ad-a98e-6481c4c4b819", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db", Pod:"calico-apiserver-5bb9f949c4-wkcll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88917f17e13", MAC:"6a:4d:08:fc:2d:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:14.560081 containerd[1468]: 2024-11-12 20:56:14.553 [INFO][4303] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9f949c4-wkcll" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:14.582340 containerd[1468]: time="2024-11-12T20:56:14.582079839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:14.582340 containerd[1468]: time="2024-11-12T20:56:14.582285393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:14.582340 containerd[1468]: time="2024-11-12T20:56:14.582305291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.582592 containerd[1468]: time="2024-11-12T20:56:14.582447884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.617031 systemd[1]: Started cri-containerd-6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db.scope - libcontainer container 6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db. 
Nov 12 20:56:14.620766 systemd-networkd[1397]: cali860f8793486: Gained IPv6LL Nov 12 20:56:14.630904 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:14.654712 containerd[1468]: time="2024-11-12T20:56:14.654659463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9f949c4-wkcll,Uid:4394043a-88b4-49ad-a98e-6481c4c4b819,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db\"" Nov 12 20:56:14.944751 containerd[1468]: time="2024-11-12T20:56:14.944700715Z" level=info msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" Nov 12 20:56:15.003070 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:14.988 [WARNING][4484] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b0285c38-1cc8-4609-bd2b-a0cdab6d4401", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122", Pod:"coredns-6f6b679f8f-z9sfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51715fbb905", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:14.988 [INFO][4484] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:14.988 [INFO][4484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" iface="eth0" netns="" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:14.988 [INFO][4484] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:14.988 [INFO][4484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.020 [INFO][4494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.020 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.020 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.027 [WARNING][4494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.027 [INFO][4494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.028 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:15.034330 containerd[1468]: 2024-11-12 20:56:15.031 [INFO][4484] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.034807 containerd[1468]: time="2024-11-12T20:56:15.034378875Z" level=info msg="TearDown network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" successfully" Nov 12 20:56:15.034807 containerd[1468]: time="2024-11-12T20:56:15.034409233Z" level=info msg="StopPodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" returns successfully" Nov 12 20:56:15.035139 containerd[1468]: time="2024-11-12T20:56:15.035097791Z" level=info msg="RemovePodSandbox for \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" Nov 12 20:56:15.037505 containerd[1468]: time="2024-11-12T20:56:15.037456797Z" level=info msg="Forcibly stopping sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\"" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.073 [WARNING][4516] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b0285c38-1cc8-4609-bd2b-a0cdab6d4401", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"593e4e638b6a747d40b96583dcf5e940a15efc982847dae097daf5b5f5911122", Pod:"coredns-6f6b679f8f-z9sfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51715fbb905", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.073 [INFO][4516] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.073 [INFO][4516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" iface="eth0" netns="" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.074 [INFO][4516] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.074 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.096 [INFO][4523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.097 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.097 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.102 [WARNING][4523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.102 [INFO][4523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" HandleID="k8s-pod-network.d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Workload="localhost-k8s-coredns--6f6b679f8f--z9sfr-eth0" Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.104 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:15.109851 containerd[1468]: 2024-11-12 20:56:15.107 [INFO][4516] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe" Nov 12 20:56:15.111171 containerd[1468]: time="2024-11-12T20:56:15.109909154Z" level=info msg="TearDown network for sandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" successfully" Nov 12 20:56:15.344415 kubelet[2514]: E1112 20:56:15.344364 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:15.451070 systemd-networkd[1397]: cali02aaec37a37: Gained IPv6LL Nov 12 20:56:15.474447 containerd[1468]: time="2024-11-12T20:56:15.474365437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:56:15.474596 containerd[1468]: time="2024-11-12T20:56:15.474467433Z" level=info msg="RemovePodSandbox \"d0053c4f14e4f5ff545ee620a7c0a50f8f7831a672c93486c993dc232798abbe\" returns successfully" Nov 12 20:56:15.475181 containerd[1468]: time="2024-11-12T20:56:15.475152304Z" level=info msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" Nov 12 20:56:15.643060 systemd-networkd[1397]: cali51715fbb905: Gained IPv6LL Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.715 [WARNING][4547] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40773a-418c-4130-9a71-ae6e65ef1939", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e", Pod:"calico-apiserver-5bb9f949c4-t9xw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860f8793486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.715 [INFO][4547] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.715 [INFO][4547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" iface="eth0" netns="" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.715 [INFO][4547] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.715 [INFO][4547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.740 [INFO][4555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.740 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.740 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.753 [WARNING][4555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.771 [INFO][4555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.773 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:15.778751 containerd[1468]: 2024-11-12 20:56:15.775 [INFO][4547] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.779514 containerd[1468]: time="2024-11-12T20:56:15.778794661Z" level=info msg="TearDown network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" successfully" Nov 12 20:56:15.779514 containerd[1468]: time="2024-11-12T20:56:15.778820821Z" level=info msg="StopPodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" returns successfully" Nov 12 20:56:15.779514 containerd[1468]: time="2024-11-12T20:56:15.779392666Z" level=info msg="RemovePodSandbox for \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" Nov 12 20:56:15.779514 containerd[1468]: time="2024-11-12T20:56:15.779429737Z" level=info msg="Forcibly stopping sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\"" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.918 [WARNING][4579] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40773a-418c-4130-9a71-ae6e65ef1939", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e", Pod:"calico-apiserver-5bb9f949c4-t9xw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860f8793486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.919 [INFO][4579] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.919 [INFO][4579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" iface="eth0" netns="" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.919 [INFO][4579] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.919 [INFO][4579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.942 [INFO][4587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.943 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.943 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.949 [WARNING][4587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.950 [INFO][4587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" HandleID="k8s-pod-network.c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--t9xw4-eth0" Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.951 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:15.957088 containerd[1468]: 2024-11-12 20:56:15.954 [INFO][4579] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33" Nov 12 20:56:15.957088 containerd[1468]: time="2024-11-12T20:56:15.957048144Z" level=info msg="TearDown network for sandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" successfully" Nov 12 20:56:15.958190 containerd[1468]: time="2024-11-12T20:56:15.958040384Z" level=info msg="StopPodSandbox for \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\"" Nov 12 20:56:15.958190 containerd[1468]: time="2024-11-12T20:56:15.958071694Z" level=info msg="StopPodSandbox for \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\"" Nov 12 20:56:15.963431 systemd-networkd[1397]: cali88917f17e13: Gained IPv6LL Nov 12 20:56:16.003254 containerd[1468]: time="2024-11-12T20:56:16.003181987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:56:16.003635 containerd[1468]: time="2024-11-12T20:56:16.003577824Z" level=info msg="RemovePodSandbox \"c16ea1492822d997fa882e45ca7665be1aee5599998f31088d14a9acabce4f33\" returns successfully" Nov 12 20:56:16.004256 containerd[1468]: time="2024-11-12T20:56:16.004216276Z" level=info msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" Nov 12 20:56:16.090343 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:50602.service - OpenSSH per-connection server daemon (10.0.0.1:50602). Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" iface="eth0" netns="/var/run/netns/cni-0d134beb-7f31-3edb-8284-c8a09cc137e4" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" iface="eth0" netns="/var/run/netns/cni-0d134beb-7f31-3edb-8284-c8a09cc137e4" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" iface="eth0" netns="/var/run/netns/cni-0d134beb-7f31-3edb-8284-c8a09cc137e4" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.014 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.059 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" HandleID="k8s-pod-network.7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.059 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.059 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.103 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" HandleID="k8s-pod-network.7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.103 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" HandleID="k8s-pod-network.7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.110 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.125891 containerd[1468]: 2024-11-12 20:56:16.117 [INFO][4626] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6" Nov 12 20:56:16.127844 containerd[1468]: time="2024-11-12T20:56:16.126559280Z" level=info msg="TearDown network for sandbox \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\" successfully" Nov 12 20:56:16.127844 containerd[1468]: time="2024-11-12T20:56:16.126597202Z" level=info msg="StopPodSandbox for \"7b1e6914f82108a00df40a9b8669e99b41753718ca1baf7d4b2227a9ae1ecdd6\" returns successfully" Nov 12 20:56:16.129468 containerd[1468]: time="2024-11-12T20:56:16.129335612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795f444d4-qgkp8,Uid:b8c09238-5c26-48aa-9e7a-e74214863f5a,Namespace:calico-system,Attempt:1,}" Nov 12 20:56:16.140132 systemd[1]: run-netns-cni\x2d0d134beb\x2d7f31\x2d3edb\x2d8284\x2dc8a09cc137e4.mount: Deactivated successfully. Nov 12 20:56:16.287405 sshd[4670]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:16.289426 sshd[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:16.294230 systemd-logind[1450]: New session 15 of user core. 
Nov 12 20:56:16.304035 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.103 [INFO][4627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.103 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" iface="eth0" netns="/var/run/netns/cni-0e440f67-41a0-f684-aeb8-4704e95837ab" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.103 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" iface="eth0" netns="/var/run/netns/cni-0e440f67-41a0-f684-aeb8-4704e95837ab" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.104 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" iface="eth0" netns="/var/run/netns/cni-0e440f67-41a0-f684-aeb8-4704e95837ab" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.104 [INFO][4627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.104 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.226 [INFO][4671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" HandleID="k8s-pod-network.20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.226 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.226 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.328 [WARNING][4671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" HandleID="k8s-pod-network.20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.328 [INFO][4671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" HandleID="k8s-pod-network.20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.330 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.334453 containerd[1468]: 2024-11-12 20:56:16.332 [INFO][4627] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55" Nov 12 20:56:16.336142 containerd[1468]: time="2024-11-12T20:56:16.336085113Z" level=info msg="TearDown network for sandbox \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\" successfully" Nov 12 20:56:16.336142 containerd[1468]: time="2024-11-12T20:56:16.336126302Z" level=info msg="StopPodSandbox for \"20eb055235585c9ce687c6e83ced03d6b61d64485502b69a67cb7e4730b16f55\" returns successfully" Nov 12 20:56:16.337796 containerd[1468]: time="2024-11-12T20:56:16.337087862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hgqb9,Uid:8786a82f-424d-47e5-a4b8-3f707927ec39,Namespace:kube-system,Attempt:1,}" Nov 12 20:56:16.337842 kubelet[2514]: E1112 20:56:16.336506 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:16.338024 systemd[1]: run-netns-cni\x2d0e440f67\x2d41a0\x2df684\x2daeb8\x2d4704e95837ab.mount: Deactivated successfully. Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.110 [WARNING][4660] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pl6sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98f557dd-e3c8-4561-ad63-16e2919af7c9", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b", Pod:"csi-node-driver-pl6sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02aaec37a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.110 [INFO][4660] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.110 [INFO][4660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" iface="eth0" netns="" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.110 [INFO][4660] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.110 [INFO][4660] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.252 [INFO][4677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.252 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.330 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.338 [WARNING][4677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.338 [INFO][4677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.343 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.348526 containerd[1468]: 2024-11-12 20:56:16.346 [INFO][4660] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.349008 kubelet[2514]: E1112 20:56:16.348517 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:16.349452 containerd[1468]: time="2024-11-12T20:56:16.349396323Z" level=info msg="TearDown network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" successfully" Nov 12 20:56:16.349515 containerd[1468]: time="2024-11-12T20:56:16.349451338Z" level=info msg="StopPodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" returns successfully" Nov 12 20:56:16.349743 containerd[1468]: time="2024-11-12T20:56:16.349724150Z" level=info msg="RemovePodSandbox for \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" Nov 12 20:56:16.349789 containerd[1468]: time="2024-11-12T20:56:16.349746683Z" level=info msg="Forcibly stopping sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\"" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.386 [WARNING][4703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pl6sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98f557dd-e3c8-4561-ad63-16e2919af7c9", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b", Pod:"csi-node-driver-pl6sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02aaec37a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.387 [INFO][4703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.387 [INFO][4703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" iface="eth0" netns="" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.387 [INFO][4703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.387 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.523 [INFO][4711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.523 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.523 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.529 [WARNING][4711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.529 [INFO][4711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" HandleID="k8s-pod-network.00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Workload="localhost-k8s-csi--node--driver--pl6sb-eth0" Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.530 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.536179 containerd[1468]: 2024-11-12 20:56:16.533 [INFO][4703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27" Nov 12 20:56:16.536635 containerd[1468]: time="2024-11-12T20:56:16.536233816Z" level=info msg="TearDown network for sandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" successfully" Nov 12 20:56:16.552230 containerd[1468]: time="2024-11-12T20:56:16.551932673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:56:16.552230 containerd[1468]: time="2024-11-12T20:56:16.552052733Z" level=info msg="RemovePodSandbox \"00a244f9073679bfcdeb77d57abfc4640ada136defe68cc2db27f4579453ef27\" returns successfully" Nov 12 20:56:16.553742 containerd[1468]: time="2024-11-12T20:56:16.552823528Z" level=info msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" Nov 12 20:56:16.634405 sshd[4670]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:16.645828 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:50602.service: Deactivated successfully. Nov 12 20:56:16.654137 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:56:16.656296 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:56:16.658522 systemd-logind[1450]: Removed session 15. Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.630 [WARNING][4745] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4394043a-88b4-49ad-a98e-6481c4c4b819", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db", Pod:"calico-apiserver-5bb9f949c4-wkcll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88917f17e13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.630 [INFO][4745] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.630 [INFO][4745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" iface="eth0" netns="" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.630 [INFO][4745] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.630 [INFO][4745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.670 [INFO][4784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.671 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.671 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.677 [WARNING][4784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.677 [INFO][4784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.678 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.686403 containerd[1468]: 2024-11-12 20:56:16.682 [INFO][4745] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:16.686403 containerd[1468]: time="2024-11-12T20:56:16.685635602Z" level=info msg="TearDown network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" successfully" Nov 12 20:56:16.696122 containerd[1468]: time="2024-11-12T20:56:16.685675609Z" level=info msg="StopPodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" returns successfully" Nov 12 20:56:16.696755 containerd[1468]: time="2024-11-12T20:56:16.696703148Z" level=info msg="RemovePodSandbox for \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" Nov 12 20:56:16.697053 containerd[1468]: time="2024-11-12T20:56:16.696761490Z" level=info msg="Forcibly stopping sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\"" Nov 12 20:56:16.859643 systemd-networkd[1397]: calic447a652ecf: Link UP Nov 12 20:56:16.862732 systemd-networkd[1397]: calic447a652ecf: Gained carrier Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.630 [INFO][4746] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0 calico-kube-controllers-7795f444d4- calico-system b8c09238-5c26-48aa-9e7a-e74214863f5a 994 0 2024-11-12 20:55:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7795f444d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7795f444d4-qgkp8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic447a652ecf [] []}} ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.631 [INFO][4746] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.704 [INFO][4793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" 
HandleID="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.718 [INFO][4793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" HandleID="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7795f444d4-qgkp8", "timestamp":"2024-11-12 20:56:16.704019276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.718 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.718 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.718 [INFO][4793] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.722 [INFO][4793] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.814 [INFO][4793] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.821 [INFO][4793] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.824 [INFO][4793] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.827 [INFO][4793] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.827 [INFO][4793] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.830 [INFO][4793] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.835 [INFO][4793] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.844 [INFO][4793] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.845 [INFO][4793] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" host="localhost" Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.845 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:16.882987 containerd[1468]: 2024-11-12 20:56:16.845 [INFO][4793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" HandleID="k8s-pod-network.aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Workload="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.851 [INFO][4746] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0", GenerateName:"calico-kube-controllers-7795f444d4-", Namespace:"calico-system", SelfLink:"", UID:"b8c09238-5c26-48aa-9e7a-e74214863f5a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795f444d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7795f444d4-qgkp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic447a652ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.851 [INFO][4746] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.851 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic447a652ecf ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.864 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" 
Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.864 [INFO][4746] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0", GenerateName:"calico-kube-controllers-7795f444d4-", Namespace:"calico-system", SelfLink:"", UID:"b8c09238-5c26-48aa-9e7a-e74214863f5a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795f444d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce", Pod:"calico-kube-controllers-7795f444d4-qgkp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic447a652ecf", MAC:"4a:3e:2b:87:6a:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:16.884416 containerd[1468]: 2024-11-12 20:56:16.879 [INFO][4746] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" Namespace="calico-system" Pod="calico-kube-controllers-7795f444d4-qgkp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7795f444d4--qgkp8-eth0" Nov 12 20:56:16.942035 containerd[1468]: time="2024-11-12T20:56:16.938791866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:16.946195 containerd[1468]: time="2024-11-12T20:56:16.945949781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:16.946195 containerd[1468]: time="2024-11-12T20:56:16.946003284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:16.946195 containerd[1468]: time="2024-11-12T20:56:16.946137360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:17.007852 systemd[1]: Started cri-containerd-aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce.scope - libcontainer container aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce. 
Nov 12 20:56:17.012594 systemd-networkd[1397]: calie43b27c9358: Link UP Nov 12 20:56:17.016346 systemd-networkd[1397]: calie43b27c9358: Gained carrier Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.637 [INFO][4762] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0 coredns-6f6b679f8f- kube-system 8786a82f-424d-47e5-a4b8-3f707927ec39 995 0 2024-11-12 20:55:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hgqb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie43b27c9358 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.637 [INFO][4762] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.709 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" HandleID="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.719 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" HandleID="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361b50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hgqb9", "timestamp":"2024-11-12 20:56:16.709308847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.719 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.845 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
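Note the interleaving in these timestamps: the coredns transaction ([4798]) logged "About to acquire host-wide IPAM lock" at 16.719 but "Acquired" only at 16.845 — the same instant the kube-controllers transaction ([4793]) logged "Released". Concurrent CNI ADDs on a node serialize on that single lock, so one slow allocation delays every other pod waiting to be networked. A toy reproduction of the contention with a plain mutex (pod names taken from the log, timings invented):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
        var wg sync.WaitGroup

        allocate := func(pod string, work time.Duration) {
            defer wg.Done()
            fmt.Printf("%s: About to acquire host-wide IPAM lock.\n", pod)
            start := time.Now()
            ipamLock.Lock()
            fmt.Printf("%s: Acquired host-wide IPAM lock after %v.\n",
                pod, time.Since(start).Round(time.Millisecond))
            time.Sleep(work) // stands in for reading and writing the block
            ipamLock.Unlock()
            fmt.Printf("%s: Released host-wide IPAM lock.\n", pod)
        }

        wg.Add(2)
        go allocate("calico-kube-controllers-7795f444d4-qgkp8", 120*time.Millisecond)
        time.Sleep(10 * time.Millisecond) // the coredns ADD arrives a beat later
        go allocate("coredns-6f6b679f8f-hgqb9", 120*time.Millisecond)
        wg.Wait()
    }

The second goroutine's "Acquired ... after" figure is roughly the first one's hold time, mirroring the ~126 ms gap between 16.719 and 16.845 above.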
Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.845 [INFO][4798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.850 [INFO][4798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.916 [INFO][4798] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.929 [INFO][4798] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.938 [INFO][4798] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.948 [INFO][4798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.948 [INFO][4798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.964 [INFO][4798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0 Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.976 [INFO][4798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.990 [INFO][4798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.990 [INFO][4798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" host="localhost" Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.990 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
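Both claims came from the same affine /26, and they are consecutive: .131 and .132 went to pods networked earlier in this log, .133 to kube-controllers, .134 to coredns. The block arithmetic is quick to sanity-check (ordinal here is simply the offset from the block base):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, blk, _ := net.ParseCIDR("192.168.88.128/26")
        ones, bits := blk.Mask.Size()
        fmt.Printf("%s holds %d addresses\n", blk, 1<<(bits-ones)) // 64

        // both claims stay inside the host's affine block
        for _, s := range []string{"192.168.88.133", "192.168.88.134"} {
            ip := net.ParseIP(s)
            ord := int(ip.To4()[3]) - int(blk.IP.To4()[3])
            fmt.Printf("%s in block: %v, ordinal %d\n", ip, blk.Contains(ip), ord)
        }
    }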
Nov 12 20:56:17.035853 containerd[1468]: 2024-11-12 20:56:16.990 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" HandleID="k8s-pod-network.9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Workload="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.001 [INFO][4762] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8786a82f-424d-47e5-a4b8-3f707927ec39", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hgqb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie43b27c9358", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.001 [INFO][4762] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.001 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie43b27c9358 ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.015 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.016 
[INFO][4762] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8786a82f-424d-47e5-a4b8-3f707927ec39", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0", Pod:"coredns-6f6b679f8f-hgqb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie43b27c9358", MAC:"26:bc:f9:c7:57:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:17.036949 containerd[1468]: 2024-11-12 20:56:17.031 [INFO][4762] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0" Namespace="kube-system" Pod="coredns-6f6b679f8f-hgqb9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hgqb9-eth0" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.764 [WARNING][4824] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0", GenerateName:"calico-apiserver-5bb9f949c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4394043a-88b4-49ad-a98e-6481c4c4b819", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9f949c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db", Pod:"calico-apiserver-5bb9f949c4-wkcll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88917f17e13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.764 [INFO][4824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.764 [INFO][4824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" iface="eth0" netns="" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.764 [INFO][4824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.764 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.805 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.805 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:16.999 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:17.015 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:17.015 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" HandleID="k8s-pod-network.c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Workload="localhost-k8s-calico--apiserver--5bb9f949c4--wkcll-eth0" Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:17.026 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:17.044562 containerd[1468]: 2024-11-12 20:56:17.037 [INFO][4824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b" Nov 12 20:56:17.045219 containerd[1468]: time="2024-11-12T20:56:17.044662240Z" level=info msg="TearDown network for sandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" successfully" Nov 12 20:56:17.049841 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:17.058711 containerd[1468]: time="2024-11-12T20:56:17.058636454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:56:17.060164 containerd[1468]: time="2024-11-12T20:56:17.059301977Z" level=info msg="RemovePodSandbox \"c0108b92bf971d9444bd385bd3267e0f9047d4c4a24990c7af161aad5c9e1b0b\" returns successfully" Nov 12 20:56:17.070831 containerd[1468]: time="2024-11-12T20:56:17.070491857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:17.070831 containerd[1468]: time="2024-11-12T20:56:17.070611716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:17.070831 containerd[1468]: time="2024-11-12T20:56:17.070650509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:17.076015 containerd[1468]: time="2024-11-12T20:56:17.075938853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:17.102345 systemd[1]: Started cri-containerd-9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0.scope - libcontainer container 9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0. 
Nov 12 20:56:17.125911 containerd[1468]: time="2024-11-12T20:56:17.125711478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795f444d4-qgkp8,Uid:b8c09238-5c26-48aa-9e7a-e74214863f5a,Namespace:calico-system,Attempt:1,} returns sandbox id \"aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce\"" Nov 12 20:56:17.136963 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:17.169641 containerd[1468]: time="2024-11-12T20:56:17.169582086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hgqb9,Uid:8786a82f-424d-47e5-a4b8-3f707927ec39,Namespace:kube-system,Attempt:1,} returns sandbox id \"9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0\"" Nov 12 20:56:17.170806 kubelet[2514]: E1112 20:56:17.170749 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:17.175232 containerd[1468]: time="2024-11-12T20:56:17.175188558Z" level=info msg="CreateContainer within sandbox \"9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:17.194913 containerd[1468]: time="2024-11-12T20:56:17.194768001Z" level=info msg="CreateContainer within sandbox \"9de281ab1cc984d5cc1dc53bff78c51334b364b3f915a2721b1a32f3d35e38d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00241c4e0e2e9a2f51dda476cc130a3950b3504565ed25268040809b02e3d01a\"" Nov 12 20:56:17.196487 containerd[1468]: time="2024-11-12T20:56:17.195492306Z" level=info msg="StartContainer for \"00241c4e0e2e9a2f51dda476cc130a3950b3504565ed25268040809b02e3d01a\"" Nov 12 20:56:17.197222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211666086.mount: Deactivated successfully. Nov 12 20:56:17.235184 systemd[1]: Started cri-containerd-00241c4e0e2e9a2f51dda476cc130a3950b3504565ed25268040809b02e3d01a.scope - libcontainer container 00241c4e0e2e9a2f51dda476cc130a3950b3504565ed25268040809b02e3d01a. 
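The dns.go:153 error that repeats throughout this log is kubelet enforcing the three-nameserver resolv.conf limit (the classic libc resolver cap): the host lists more than three servers, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive into pod resolv.conf files, and the event fires on every sync. A sketch of the truncation — the fourth server below is a made-up stand-in, since the log never names what was dropped:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubelet keeps at most three nameservers and emits the
    // "Nameserver limits exceeded" event when it has to drop some.
    const maxNameservers = 3

    func applyLimit(ns []string) (kept []string, truncated bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // first three entries are from the log; 192.0.2.53 is hypothetical
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
        kept, truncated := applyLimit(host)
        if truncated {
            fmt.Println("the applied nameserver line is:", strings.Join(kept, " "))
        }
    }

Trimming the node's resolv.conf (or pointing kubelet at one with three or fewer entries) would silence the event.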
Nov 12 20:56:17.267701 containerd[1468]: time="2024-11-12T20:56:17.267651152Z" level=info msg="StartContainer for \"00241c4e0e2e9a2f51dda476cc130a3950b3504565ed25268040809b02e3d01a\" returns successfully" Nov 12 20:56:17.355649 kubelet[2514]: E1112 20:56:17.355608 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:17.361637 kubelet[2514]: E1112 20:56:17.361600 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:17.378403 kubelet[2514]: I1112 20:56:17.378052 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hgqb9" podStartSLOduration=57.378031533 podStartE2EDuration="57.378031533s" podCreationTimestamp="2024-11-12 20:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:17.377373344 +0000 UTC m=+62.524003782" watchObservedRunningTime="2024-11-12 20:56:17.378031533 +0000 UTC m=+62.524661951" Nov 12 20:56:17.492446 containerd[1468]: time="2024-11-12T20:56:17.492375373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.493169 containerd[1468]: time="2024-11-12T20:56:17.493104067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:56:17.494320 containerd[1468]: time="2024-11-12T20:56:17.494259526Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.496889 containerd[1468]: time="2024-11-12T20:56:17.496820485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.497678 containerd[1468]: time="2024-11-12T20:56:17.497611968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.690421114s" Nov 12 20:56:17.497678 containerd[1468]: time="2024-11-12T20:56:17.497670089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:56:17.499391 containerd[1468]: time="2024-11-12T20:56:17.499358488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:56:17.500549 containerd[1468]: time="2024-11-12T20:56:17.500507605Z" level=info msg="CreateContainer within sandbox \"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:56:17.514558 containerd[1468]: time="2024-11-12T20:56:17.514483363Z" level=info msg="CreateContainer within sandbox \"8b198594bde9e9d7cddf3f021841eabeb9febec7e0b496e085d8f0ddb817b10e\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f7bd1fb3226d864657a6459ba4b50532cdb5b1dbd492489d3fa5562d7493e0f\"" Nov 12 20:56:17.515112 containerd[1468]: time="2024-11-12T20:56:17.515073992Z" level=info msg="StartContainer for \"5f7bd1fb3226d864657a6459ba4b50532cdb5b1dbd492489d3fa5562d7493e0f\"" Nov 12 20:56:17.552347 systemd[1]: Started cri-containerd-5f7bd1fb3226d864657a6459ba4b50532cdb5b1dbd492489d3fa5562d7493e0f.scope - libcontainer container 5f7bd1fb3226d864657a6459ba4b50532cdb5b1dbd492489d3fa5562d7493e0f. Nov 12 20:56:17.670236 containerd[1468]: time="2024-11-12T20:56:17.669778278Z" level=info msg="StartContainer for \"5f7bd1fb3226d864657a6459ba4b50532cdb5b1dbd492489d3fa5562d7493e0f\" returns successfully" Nov 12 20:56:18.362673 kubelet[2514]: E1112 20:56:18.362390 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:18.397061 systemd-networkd[1397]: calie43b27c9358: Gained IPv6LL Nov 12 20:56:18.455733 kubelet[2514]: I1112 20:56:18.454661 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb9f949c4-t9xw4" podStartSLOduration=39.762916069 podStartE2EDuration="43.454639013s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:56:13.806893525 +0000 UTC m=+58.953523943" lastFinishedPulling="2024-11-12 20:56:17.498616479 +0000 UTC m=+62.645246887" observedRunningTime="2024-11-12 20:56:18.454409514 +0000 UTC m=+63.601039932" watchObservedRunningTime="2024-11-12 20:56:18.454639013 +0000 UTC m=+63.601269431" Nov 12 20:56:18.715095 systemd-networkd[1397]: calic447a652ecf: Gained IPv6LL Nov 12 20:56:19.225160 containerd[1468]: time="2024-11-12T20:56:19.225091136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:19.226101 containerd[1468]: time="2024-11-12T20:56:19.226030800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:56:19.227790 containerd[1468]: time="2024-11-12T20:56:19.227760275Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:19.230293 containerd[1468]: time="2024-11-12T20:56:19.230224713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:19.231240 containerd[1468]: time="2024-11-12T20:56:19.231194115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.731795902s" Nov 12 20:56:19.231330 containerd[1468]: time="2024-11-12T20:56:19.231242548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:56:19.232688 containerd[1468]: time="2024-11-12T20:56:19.232662030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 
12 20:56:19.235361 containerd[1468]: time="2024-11-12T20:56:19.235327793Z" level=info msg="CreateContainer within sandbox \"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:56:19.256811 containerd[1468]: time="2024-11-12T20:56:19.256761424Z" level=info msg="CreateContainer within sandbox \"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b78da2e7dc6750357b63784ea34ec43d6f54c6533fd44288fd68c5caf6130e95\"" Nov 12 20:56:19.257543 containerd[1468]: time="2024-11-12T20:56:19.257497691Z" level=info msg="StartContainer for \"b78da2e7dc6750357b63784ea34ec43d6f54c6533fd44288fd68c5caf6130e95\"" Nov 12 20:56:19.308185 systemd[1]: Started cri-containerd-b78da2e7dc6750357b63784ea34ec43d6f54c6533fd44288fd68c5caf6130e95.scope - libcontainer container b78da2e7dc6750357b63784ea34ec43d6f54c6533fd44288fd68c5caf6130e95. Nov 12 20:56:19.344117 containerd[1468]: time="2024-11-12T20:56:19.344047351Z" level=info msg="StartContainer for \"b78da2e7dc6750357b63784ea34ec43d6f54c6533fd44288fd68c5caf6130e95\" returns successfully" Nov 12 20:56:19.366393 kubelet[2514]: E1112 20:56:19.366328 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:19.792281 containerd[1468]: time="2024-11-12T20:56:19.792204866Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:19.793226 containerd[1468]: time="2024-11-12T20:56:19.793173948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:56:19.795971 containerd[1468]: time="2024-11-12T20:56:19.795891650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 563.172481ms" Nov 12 20:56:19.795971 containerd[1468]: time="2024-11-12T20:56:19.795941266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:56:19.797113 containerd[1468]: time="2024-11-12T20:56:19.797083597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:56:19.798438 containerd[1468]: time="2024-11-12T20:56:19.798395034Z" level=info msg="CreateContainer within sandbox \"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:56:19.816288 containerd[1468]: time="2024-11-12T20:56:19.816183169Z" level=info msg="CreateContainer within sandbox \"6f0fc33a43f34da160d37e754f19ed98fc19603a2033f623b7045033f02d28db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64edd991dfbad13cf2cf1e44dce68fdf6fa8a47cd6cc0053e7c1dddfc1790378\"" Nov 12 20:56:19.817050 containerd[1468]: time="2024-11-12T20:56:19.816977276Z" level=info msg="StartContainer for \"64edd991dfbad13cf2cf1e44dce68fdf6fa8a47cd6cc0053e7c1dddfc1790378\"" Nov 12 20:56:19.863254 systemd[1]: Started 
cri-containerd-64edd991dfbad13cf2cf1e44dce68fdf6fa8a47cd6cc0053e7c1dddfc1790378.scope - libcontainer container 64edd991dfbad13cf2cf1e44dce68fdf6fa8a47cd6cc0053e7c1dddfc1790378. Nov 12 20:56:19.908635 containerd[1468]: time="2024-11-12T20:56:19.908587947Z" level=info msg="StartContainer for \"64edd991dfbad13cf2cf1e44dce68fdf6fa8a47cd6cc0053e7c1dddfc1790378\" returns successfully" Nov 12 20:56:20.410642 kubelet[2514]: I1112 20:56:20.410563 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb9f949c4-wkcll" podStartSLOduration=40.269568003 podStartE2EDuration="45.41053577s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:56:14.655881984 +0000 UTC m=+59.802512402" lastFinishedPulling="2024-11-12 20:56:19.796849751 +0000 UTC m=+64.943480169" observedRunningTime="2024-11-12 20:56:20.409387247 +0000 UTC m=+65.556017665" watchObservedRunningTime="2024-11-12 20:56:20.41053577 +0000 UTC m=+65.557166188" Nov 12 20:56:21.371112 kubelet[2514]: I1112 20:56:21.371063 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:56:21.658428 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Nov 12 20:56:21.704309 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:21.706828 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:21.712958 systemd-logind[1450]: New session 16 of user core. Nov 12 20:56:21.723133 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:56:22.092654 sshd[5130]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:22.097208 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:50606.service: Deactivated successfully. Nov 12 20:56:22.099630 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:56:22.100468 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:56:22.101574 systemd-logind[1450]: Removed session 16. 
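The pod_startup_latency_tracker line is pure arithmetic over the timestamps it prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration differs from it by exactly the firstStartedPulling→lastFinishedPulling window (45.41053577 − 5.140967767 = 40.269568003), i.e. image-pull time is excluded from the SLO figure. Reproducing the calico-apiserver-5bb9f949c4-wkcll numbers:

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the timestamps kubelet prints (Go's default time format).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-11-12 20:55:35 +0000 UTC")
        firstPull := mustParse("2024-11-12 20:56:14.655881984 +0000 UTC")
        lastPull := mustParse("2024-11-12 20:56:19.796849751 +0000 UTC")
        running := mustParse("2024-11-12 20:56:20.41053577 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded
        fmt.Println(e2e, slo)                // 45.41053577s 40.269568003s
    }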
Nov 12 20:56:22.831724 kubelet[2514]: I1112 20:56:22.831647 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:56:24.217383 containerd[1468]: time="2024-11-12T20:56:24.217309116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:24.219026 containerd[1468]: time="2024-11-12T20:56:24.218966406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:56:24.221006 containerd[1468]: time="2024-11-12T20:56:24.220971909Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:24.223461 containerd[1468]: time="2024-11-12T20:56:24.223417502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:24.224696 containerd[1468]: time="2024-11-12T20:56:24.224438429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 4.427315896s" Nov 12 20:56:24.224696 containerd[1468]: time="2024-11-12T20:56:24.224481541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:56:24.225894 containerd[1468]: time="2024-11-12T20:56:24.225703130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:56:24.240021 containerd[1468]: time="2024-11-12T20:56:24.239967155Z" level=info msg="CreateContainer within sandbox \"aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:56:24.264280 containerd[1468]: time="2024-11-12T20:56:24.264128569Z" level=info msg="CreateContainer within sandbox \"aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ffa6bb2cbce9010ea318e6b0646e4343284f3f8ad61ffa230752952ee59ce5c2\"" Nov 12 20:56:24.264716 containerd[1468]: time="2024-11-12T20:56:24.264680913Z" level=info msg="StartContainer for \"ffa6bb2cbce9010ea318e6b0646e4343284f3f8ad61ffa230752952ee59ce5c2\"" Nov 12 20:56:24.317428 systemd[1]: Started cri-containerd-ffa6bb2cbce9010ea318e6b0646e4343284f3f8ad61ffa230752952ee59ce5c2.scope - libcontainer container ffa6bb2cbce9010ea318e6b0646e4343284f3f8ad61ffa230752952ee59ce5c2. 
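The same three-step pattern repeats for each workload container in this section: PullImage, CreateContainer within an existing sandbox, then StartContainer once systemd has started the per-container cri-containerd-<id>.scope. A stub that narrates the ordering — fakeRuntime and its ids are stand-ins, not containerd's real CRI surface:

    package main

    import "fmt"

    // fakeRuntime hands out fake ids so the call order can be printed.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) call(msg string) string {
        f.n++
        id := fmt.Sprintf("id-%d", f.n)
        fmt.Printf("%d. %s => %s\n", f.n, msg, id)
        return id
    }

    func main() {
        r := &fakeRuntime{}
        img := r.call(`PullImage "ghcr.io/flatcar/calico/kube-controllers:v3.29.0"`)
        cid := r.call(`CreateContainer within sandbox "aead6bfbe95f74928d3663e36344aca6f417552b045f7e552b85f93b7f8f3dce" using ` + img)
        // systemd starts the per-container scope between these two steps:
        // "Started cri-containerd-<id>.scope - libcontainer container <id>"
        r.call(`StartContainer ` + cid)
    }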
Nov 12 20:56:24.367819 containerd[1468]: time="2024-11-12T20:56:24.367766020Z" level=info msg="StartContainer for \"ffa6bb2cbce9010ea318e6b0646e4343284f3f8ad61ffa230752952ee59ce5c2\" returns successfully"
Nov 12 20:56:24.406128 kubelet[2514]: I1112 20:56:24.406059 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7795f444d4-qgkp8" podStartSLOduration=42.308549562 podStartE2EDuration="49.406024842s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:56:17.128073225 +0000 UTC m=+62.274703643" lastFinishedPulling="2024-11-12 20:56:24.225548505 +0000 UTC m=+69.372178923" observedRunningTime="2024-11-12 20:56:24.405294069 +0000 UTC m=+69.551924497" watchObservedRunningTime="2024-11-12 20:56:24.406024842 +0000 UTC m=+69.552655260"
Nov 12 20:56:24.982016 kubelet[2514]: E1112 20:56:24.981854 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:26.073232 containerd[1468]: time="2024-11-12T20:56:26.073140856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:26.074178 containerd[1468]: time="2024-11-12T20:56:26.074108909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080"
Nov 12 20:56:26.075736 containerd[1468]: time="2024-11-12T20:56:26.075674953Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:26.079629 containerd[1468]: time="2024-11-12T20:56:26.079578280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:26.080548 containerd[1468]: time="2024-11-12T20:56:26.080486850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.854742793s"
Nov 12 20:56:26.080618 containerd[1468]: time="2024-11-12T20:56:26.080545973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\""
Nov 12 20:56:26.083238 containerd[1468]: time="2024-11-12T20:56:26.083191493Z" level=info msg="CreateContainer within sandbox \"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 20:56:26.104007 containerd[1468]: time="2024-11-12T20:56:26.103947265Z" level=info msg="CreateContainer within sandbox \"ca41dfffa60219b2a84e454affbfef75f4874400f140c41f66bea3989064b68b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85\""
Nov 12 20:56:26.105519 containerd[1468]: time="2024-11-12T20:56:26.105476799Z" level=info msg="StartContainer for \"b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85\""
Nov 12 20:56:26.176115 systemd[1]: Started cri-containerd-b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85.scope - libcontainer container b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85.
Nov 12 20:56:26.219976 containerd[1468]: time="2024-11-12T20:56:26.219904295Z" level=info msg="StartContainer for \"b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85\" returns successfully"
Nov 12 20:56:26.235734 systemd[1]: run-containerd-runc-k8s.io-b9402add8ee9e20a76a9f885c9c2343b700069459cfb889e5259c9d048875e85-runc.8SsGJE.mount: Deactivated successfully.
Nov 12 20:56:26.529933 kubelet[2514]: I1112 20:56:26.529818 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pl6sb" podStartSLOduration=40.005656744 podStartE2EDuration="51.52979026s" podCreationTimestamp="2024-11-12 20:55:35 +0000 UTC" firstStartedPulling="2024-11-12 20:56:14.557329915 +0000 UTC m=+59.703960333" lastFinishedPulling="2024-11-12 20:56:26.081463431 +0000 UTC m=+71.228093849" observedRunningTime="2024-11-12 20:56:26.529209264 +0000 UTC m=+71.675839682" watchObservedRunningTime="2024-11-12 20:56:26.52979026 +0000 UTC m=+71.676420678"
Nov 12 20:56:27.044062 kubelet[2514]: I1112 20:56:27.044015 2514 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 20:56:27.044062 kubelet[2514]: I1112 20:56:27.044070 2514 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 20:56:27.105734 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:48844.service - OpenSSH per-connection server daemon (10.0.0.1:48844).
Nov 12 20:56:27.164184 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 48844 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:27.166762 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:27.173166 systemd-logind[1450]: New session 17 of user core.
Nov 12 20:56:27.183079 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:56:27.328947 sshd[5288]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:27.333817 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:48844.service: Deactivated successfully.
Nov 12 20:56:27.336157 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:56:27.337013 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:56:27.338253 systemd-logind[1450]: Removed session 17.
Nov 12 20:56:27.957094 kubelet[2514]: E1112 20:56:27.956966 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:31.956990 kubelet[2514]: E1112 20:56:31.956935 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:32.346175 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:48850.service - OpenSSH per-connection server daemon (10.0.0.1:48850).
Nov 12 20:56:32.412170 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 48850 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:32.414209 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:32.418783 systemd-logind[1450]: New session 18 of user core.
Nov 12 20:56:32.425063 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:56:32.555437 sshd[5323]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:32.565482 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:48850.service: Deactivated successfully.
Nov 12 20:56:32.568278 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:56:32.570472 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:56:32.577235 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:48862.service - OpenSSH per-connection server daemon (10.0.0.1:48862).
Nov 12 20:56:32.578646 systemd-logind[1450]: Removed session 18.
Nov 12 20:56:32.611137 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 48862 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:32.612952 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:32.618026 systemd-logind[1450]: New session 19 of user core.
Nov 12 20:56:32.625045 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:56:33.358696 sshd[5337]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:33.372580 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:48862.service: Deactivated successfully.
Nov 12 20:56:33.374795 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:56:33.376754 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:56:33.391458 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:48876.service - OpenSSH per-connection server daemon (10.0.0.1:48876).
Nov 12 20:56:33.392768 systemd-logind[1450]: Removed session 19.
Nov 12 20:56:33.432072 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 48876 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:33.434511 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:33.440959 systemd-logind[1450]: New session 20 of user core.
Nov 12 20:56:33.452190 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:56:35.366971 sshd[5349]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:35.377699 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:48876.service: Deactivated successfully.
Nov 12 20:56:35.379630 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:56:35.381137 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:56:35.382780 systemd-logind[1450]: Removed session 20.
Nov 12 20:56:35.388533 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:48882.service - OpenSSH per-connection server daemon (10.0.0.1:48882).
Nov 12 20:56:35.428077 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 48882 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:35.430131 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:35.435339 systemd-logind[1450]: New session 21 of user core.
Nov 12 20:56:35.445008 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:56:35.750896 sshd[5391]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:35.772274 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:48882.service: Deactivated successfully.
Nov 12 20:56:35.775628 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:56:35.779428 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:56:35.796700 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:51634.service - OpenSSH per-connection server daemon (10.0.0.1:51634).
Nov 12 20:56:35.801964 systemd-logind[1450]: Removed session 21.
Nov 12 20:56:35.834092 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 51634 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:35.836176 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:35.841567 systemd-logind[1450]: New session 22 of user core.
Nov 12 20:56:35.855323 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:56:36.067601 sshd[5403]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:36.072688 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:51634.service: Deactivated successfully.
Nov 12 20:56:36.075258 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:56:36.076000 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:56:36.077207 systemd-logind[1450]: Removed session 22.
Nov 12 20:56:41.084191 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:51636.service - OpenSSH per-connection server daemon (10.0.0.1:51636).
Nov 12 20:56:41.121330 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 51636 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:41.123546 sshd[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:41.128458 systemd-logind[1450]: New session 23 of user core.
Nov 12 20:56:41.137072 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:56:41.270393 sshd[5419]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:41.275540 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:51636.service: Deactivated successfully.
Nov 12 20:56:41.277857 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:56:41.278565 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:56:41.279587 systemd-logind[1450]: Removed session 23.
Nov 12 20:56:43.957305 kubelet[2514]: E1112 20:56:43.957243 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:44.957192 kubelet[2514]: E1112 20:56:44.957129 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:46.285959 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672).
Nov 12 20:56:46.329679 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:46.331699 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:46.337060 systemd-logind[1450]: New session 24 of user core.
Nov 12 20:56:46.342026 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:56:46.480159 sshd[5437]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:46.486947 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:56672.service: Deactivated successfully.
Nov 12 20:56:46.493194 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:56:46.497134 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:56:46.498351 systemd-logind[1450]: Removed session 24.
Nov 12 20:56:51.494738 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:56678.service - OpenSSH per-connection server daemon (10.0.0.1:56678).
Nov 12 20:56:51.533272 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 56678 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:51.535381 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:51.540468 systemd-logind[1450]: New session 25 of user core.
Nov 12 20:56:51.553243 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:56:51.675366 sshd[5454]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:51.679886 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:56678.service: Deactivated successfully.
Nov 12 20:56:51.682271 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:56:51.684306 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:56:51.685625 systemd-logind[1450]: Removed session 25.
Nov 12 20:56:54.957954 kubelet[2514]: E1112 20:56:54.957806 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:56.689401 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:57838.service - OpenSSH per-connection server daemon (10.0.0.1:57838).
Nov 12 20:56:56.730266 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 57838 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:56:56.732087 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:56.736556 systemd-logind[1450]: New session 26 of user core.
Nov 12 20:56:56.747024 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:56:56.857182 sshd[5496]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:56.862141 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:57838.service: Deactivated successfully.
Nov 12 20:56:56.864579 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:56:56.865388 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:56:56.866358 systemd-logind[1450]: Removed session 26.