Nov 12 20:53:05.099552 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:53:05.099575 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:05.099586 kernel: BIOS-provided physical RAM map:
Nov 12 20:53:05.099593 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:53:05.099601 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:53:05.099609 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:53:05.099619 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 20:53:05.099628 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 20:53:05.099636 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:53:05.099645 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 20:53:05.099656 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:53:05.099662 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:53:05.099668 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 20:53:05.099675 kernel: NX (Execute Disable) protection: active
Nov 12 20:53:05.099682 kernel: APIC: Static calls initialized
Nov 12 20:53:05.099692 kernel: SMBIOS 2.8 present.
Nov 12 20:53:05.099701 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 20:53:05.099709 kernel: Hypervisor detected: KVM
Nov 12 20:53:05.099719 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:53:05.099727 kernel: kvm-clock: using sched offset of 2786789884 cycles
Nov 12 20:53:05.099737 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:53:05.099746 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:53:05.099754 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:53:05.099763 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:53:05.099776 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 20:53:05.099786 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:53:05.099797 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:53:05.099805 kernel: Using GB pages for direct mapping
Nov 12 20:53:05.099814 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:53:05.099822 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 20:53:05.099830 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099839 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099848 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099859 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 20:53:05.099868 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099877 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099885 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099894 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:05.099918 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 20:53:05.099927 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 20:53:05.099944 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 20:53:05.099957 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 20:53:05.099966 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 20:53:05.099976 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 20:53:05.099985 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 20:53:05.099993 kernel: No NUMA configuration found
Nov 12 20:53:05.100000 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 20:53:05.100010 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 20:53:05.100017 kernel: Zone ranges:
Nov 12 20:53:05.100025 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:53:05.100032 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 20:53:05.100039 kernel: Normal empty
Nov 12 20:53:05.100046 kernel: Movable zone start for each node
Nov 12 20:53:05.100053 kernel: Early memory node ranges
Nov 12 20:53:05.100061 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:53:05.100068 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 20:53:05.100075 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 20:53:05.100088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:53:05.100095 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:53:05.100103 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 20:53:05.100110 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:53:05.100117 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:53:05.100125 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:53:05.100132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:53:05.100139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:53:05.100146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:53:05.100156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:53:05.100163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:53:05.100170 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:53:05.100180 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:53:05.100190 kernel: TSC deadline timer available
Nov 12 20:53:05.100199 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:53:05.100209 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:53:05.100218 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:53:05.100227 kernel: kvm-guest: setup PV sched yield
Nov 12 20:53:05.100242 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 20:53:05.100251 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:53:05.100261 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:53:05.100270 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:53:05.100279 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:53:05.100288 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:53:05.100297 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:53:05.100306 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:53:05.100315 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:53:05.100328 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:05.100345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:53:05.100355 kernel: random: crng init done
Nov 12 20:53:05.100364 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:53:05.100373 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:05.100382 kernel: Fallback order for Node 0: 0
Nov 12 20:53:05.100392 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 20:53:05.100401 kernel: Policy zone: DMA32
Nov 12 20:53:05.100413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:53:05.100423 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 136900K reserved, 0K cma-reserved)
Nov 12 20:53:05.100431 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:53:05.100438 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:53:05.100445 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:53:05.100453 kernel: Dynamic Preempt: voluntary
Nov 12 20:53:05.100460 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:53:05.100468 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:53:05.100475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:53:05.100486 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:53:05.100493 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:53:05.100500 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:53:05.100510 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:53:05.100518 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:53:05.100525 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:53:05.100532 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:53:05.100540 kernel: Console: colour VGA+ 80x25
Nov 12 20:53:05.100547 kernel: printk: console [ttyS0] enabled
Nov 12 20:53:05.100557 kernel: ACPI: Core revision 20230628
Nov 12 20:53:05.100564 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:53:05.100571 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:53:05.100579 kernel: x2apic enabled
Nov 12 20:53:05.100586 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:53:05.100595 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:53:05.100605 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:53:05.100616 kernel: kvm-guest: setup PV IPIs
Nov 12 20:53:05.100639 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:53:05.100650 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:53:05.100661 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:53:05.100673 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:53:05.100687 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:53:05.100698 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:53:05.100709 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:53:05.100719 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:53:05.100730 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:53:05.100745 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:53:05.100756 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:53:05.100766 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:53:05.100782 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:53:05.100792 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:53:05.100803 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:53:05.100814 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:53:05.100824 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:53:05.100839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:53:05.100849 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:53:05.100860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:53:05.100870 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:53:05.100880 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:53:05.100892 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:53:05.100919 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:53:05.100930 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:53:05.100941 kernel: landlock: Up and running.
Nov 12 20:53:05.100956 kernel: SELinux: Initializing.
Nov 12 20:53:05.100966 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:53:05.100977 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:53:05.100988 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:53:05.100998 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:05.101008 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:05.101023 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:05.101033 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:53:05.101043 kernel: ... version: 0
Nov 12 20:53:05.101058 kernel: ... bit width: 48
Nov 12 20:53:05.101068 kernel: ... generic registers: 6
Nov 12 20:53:05.101079 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:53:05.101089 kernel: ... max period: 00007fffffffffff
Nov 12 20:53:05.101100 kernel: ... fixed-purpose events: 0
Nov 12 20:53:05.101110 kernel: ... event mask: 000000000000003f
Nov 12 20:53:05.101121 kernel: signal: max sigframe size: 1776
Nov 12 20:53:05.101132 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:53:05.101143 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:53:05.101158 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:53:05.101169 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:53:05.101180 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:53:05.101191 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:53:05.101202 kernel: smpboot: Max logical packages: 1
Nov 12 20:53:05.101212 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:53:05.101224 kernel: devtmpfs: initialized
Nov 12 20:53:05.101234 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:53:05.101245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:53:05.101261 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:53:05.101272 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:53:05.101283 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:53:05.101293 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:53:05.101305 kernel: audit: type=2000 audit(1731444783.934:1): state=initialized audit_enabled=0 res=1
Nov 12 20:53:05.101316 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:53:05.101326 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:53:05.101337 kernel: cpuidle: using governor menu
Nov 12 20:53:05.101358 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:53:05.101374 kernel: dca service started, version 1.12.1
Nov 12 20:53:05.101385 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:53:05.101396 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:53:05.101407 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:53:05.101418 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:53:05.101428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:05.101439 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:05.101450 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:05.101461 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:05.101476 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:05.101487 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:05.101498 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:05.101509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:05.101520 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:05.101531 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:05.101542 kernel: ACPI: Interpreter enabled
Nov 12 20:53:05.101552 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:53:05.101563 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:05.101579 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:05.101589 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:53:05.101601 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:53:05.101611 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:53:05.101890 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:53:05.102192 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:53:05.102381 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:53:05.102402 kernel: PCI host bridge to bus 0000:00
Nov 12 20:53:05.102550 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:53:05.102685 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:53:05.102816 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:53:05.102991 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:53:05.103117 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:53:05.103247 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 20:53:05.103402 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:53:05.103639 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:53:05.103827 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:53:05.104022 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 20:53:05.104190 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 20:53:05.104375 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 20:53:05.104556 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:53:05.104774 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:53:05.104982 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 20:53:05.105181 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 20:53:05.105429 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 20:53:05.105662 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:53:05.105840 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:53:05.106020 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 20:53:05.106162 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 20:53:05.106324 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:53:05.106488 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 20:53:05.106663 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 20:53:05.106843 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 20:53:05.107034 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 20:53:05.107218 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:53:05.107479 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:53:05.107646 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:53:05.107821 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 20:53:05.108021 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 20:53:05.108222 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:53:05.108405 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 20:53:05.108429 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:53:05.108440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:53:05.108471 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:53:05.108492 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:53:05.108503 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:53:05.108513 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:53:05.108524 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:53:05.108534 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:53:05.108544 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:53:05.108560 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:53:05.108576 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:53:05.108586 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:53:05.108597 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:53:05.108608 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:53:05.108618 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:53:05.108628 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:53:05.108639 kernel: iommu: Default domain type: Translated
Nov 12 20:53:05.108650 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:05.108666 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:05.108677 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:53:05.108689 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:53:05.108699 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 20:53:05.108884 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:53:05.109097 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:53:05.109268 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:53:05.109285 kernel: vgaarb: loaded
Nov 12 20:53:05.109303 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:53:05.109314 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:53:05.109325 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:53:05.109335 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:05.109359 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:05.109370 kernel: pnp: PnP ACPI init
Nov 12 20:53:05.109580 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:53:05.109599 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:53:05.109616 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:05.109627 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:05.109638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:05.109649 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:53:05.109660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:05.109670 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:53:05.109681 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:05.109691 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:53:05.109702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:53:05.109718 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:53:05.109728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:05.109738 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:05.110021 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:53:05.110189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:53:05.110356 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:53:05.110515 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:53:05.110670 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:53:05.110839 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 20:53:05.110855 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:05.110867 kernel: Initialise system trusted keyrings
Nov 12 20:53:05.110877 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:53:05.110888 kernel: Key type asymmetric registered
Nov 12 20:53:05.110927 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:05.110940 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:05.110951 kernel: io scheduler mq-deadline registered
Nov 12 20:53:05.110962 kernel: io scheduler kyber registered
Nov 12 20:53:05.110978 kernel: io scheduler bfq registered
Nov 12 20:53:05.110989 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:05.111001 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:53:05.111012 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:53:05.111023 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:53:05.111034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:05.111045 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:05.111056 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:53:05.111066 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:53:05.111081 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:53:05.111272 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:53:05.111290 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:53:05.111461 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:53:05.111622 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:53:04 UTC (1731444784)
Nov 12 20:53:05.111784 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:53:05.111800 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:53:05.111811 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:05.111828 kernel: Segment Routing with IPv6
Nov 12 20:53:05.111839 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:05.111850 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:05.111860 kernel: Key type dns_resolver registered
Nov 12 20:53:05.111872 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:05.111885 kernel: sched_clock: Marking stable (689003244, 121913863)->(877219583, -66302476)
Nov 12 20:53:05.111896 kernel: registered taskstats version 1
Nov 12 20:53:05.111975 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:05.111986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:05.112003 kernel: Key type .fscrypt registered
Nov 12 20:53:05.112014 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:05.112024 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:05.112035 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:05.112046 kernel: ima: No architecture policies found
Nov 12 20:53:05.112057 kernel: clk: Disabling unused clocks
Nov 12 20:53:05.112068 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:05.112079 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:05.112090 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:05.112106 kernel: Run /init as init process
Nov 12 20:53:05.112117 kernel: with arguments:
Nov 12 20:53:05.112128 kernel: /init
Nov 12 20:53:05.112138 kernel: with environment:
Nov 12 20:53:05.112148 kernel: HOME=/
Nov 12 20:53:05.112159 kernel: TERM=linux
Nov 12 20:53:05.112169 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:05.112182 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:05.112201 systemd[1]: Detected virtualization kvm.
Nov 12 20:53:05.112213 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:05.112224 systemd[1]: Running in initrd.
Nov 12 20:53:05.112235 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:05.112246 systemd[1]: Hostname set to <localhost>.
Nov 12 20:53:05.112258 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:53:05.112269 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:05.112281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:05.112297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:05.112310 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:05.112349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:05.112365 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:05.112377 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:05.112395 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:05.112407 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:05.112420 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:05.112432 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:05.112443 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:05.112455 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:05.112467 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:05.112479 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:05.112495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:05.112506 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:05.112519 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:05.112531 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:05.112543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:05.112555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:05.112567 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:05.112579 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:05.112590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:05.112606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:05.112618 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:05.112629 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:05.112641 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:05.112652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:05.112664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:05.112676 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:05.112688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:05.112704 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:05.112748 systemd-journald[191]: Collecting audit messages is disabled.
Nov 12 20:53:05.112782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:05.112798 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:05.112810 systemd-journald[191]: Journal started
Nov 12 20:53:05.112837 systemd-journald[191]: Runtime Journal (/run/log/journal/36cc87e42da3475ab02d0cd740f932c5) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:53:05.105814 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 20:53:05.148676 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:05.148709 kernel: Bridge firewalling registered
Nov 12 20:53:05.139627 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 20:53:05.151585 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:05.152251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:05.154864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:05.171141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:05.175566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:05.178443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:05.181881 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:05.195362 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:05.198502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:05.201508 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:05.205579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:05.220171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:05.223453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:05.240241 dracut-cmdline[227]: dracut-dracut-053
Nov 12 20:53:05.244654 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:05.265649 systemd-resolved[230]: Positive Trust Anchors:
Nov 12 20:53:05.265672 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:05.265704 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:05.268555 systemd-resolved[230]: Defaulting to hostname 'linux'.
Nov 12 20:53:05.269768 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:05.276247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:05.338942 kernel: SCSI subsystem initialized
Nov 12 20:53:05.348926 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:05.358935 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:05.384353 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:05.384437 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:05.437241 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:05.451053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:05.480947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:05.481021 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:05.481037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:05.526939 kernel: raid6: avx2x4 gen() 28104 MB/s
Nov 12 20:53:05.543928 kernel: raid6: avx2x2 gen() 30121 MB/s
Nov 12 20:53:05.561240 kernel: raid6: avx2x1 gen() 24900 MB/s
Nov 12 20:53:05.561323 kernel: raid6: using algorithm avx2x2 gen() 30121 MB/s
Nov 12 20:53:05.579392 kernel: raid6: .... xor() 15662 MB/s, rmw enabled
Nov 12 20:53:05.579495 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:53:05.606989 kernel: xor: automatically using best checksumming function avx
Nov 12 20:53:05.783942 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:05.799675 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:05.812079 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:05.830152 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 12 20:53:05.835718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:05.856250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:05.872979 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Nov 12 20:53:05.911222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:05.923104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:05.992430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:06.005363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:53:06.019783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:06.023734 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:06.026374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:06.029168 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:06.037054 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:53:06.042925 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:53:06.070580 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:53:06.071572 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:53:06.071596 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:53:06.071613 kernel: GPT:9289727 != 19775487
Nov 12 20:53:06.071626 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:53:06.071637 kernel: GPT:9289727 != 19775487
Nov 12 20:53:06.071647 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:53:06.071658 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:06.071668 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:53:06.071678 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:53:06.054943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:06.080356 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:06.080490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:06.132526 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:06.133988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:06.134208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:06.139129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:06.148935 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Nov 12 20:53:06.149129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:06.151661 kernel: libata version 3.00 loaded.
Nov 12 20:53:06.155069 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Nov 12 20:53:06.168384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:53:06.178273 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:53:06.187361 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:53:06.187378 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:53:06.187537 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:53:06.187697 kernel: scsi host0: ahci
Nov 12 20:53:06.187891 kernel: scsi host1: ahci
Nov 12 20:53:06.188073 kernel: scsi host2: ahci
Nov 12 20:53:06.188232 kernel: scsi host3: ahci
Nov 12 20:53:06.188404 kernel: scsi host4: ahci
Nov 12 20:53:06.188554 kernel: scsi host5: ahci
Nov 12 20:53:06.188714 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 20:53:06.188729 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 20:53:06.188745 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 20:53:06.188755 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 20:53:06.188765 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 20:53:06.188776 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 20:53:06.185984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:53:06.228119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:53:06.228512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:06.242373 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:53:06.242532 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:53:06.259296 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:06.262045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:06.289477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:06.369985 disk-uuid[566]: Primary Header is updated.
Nov 12 20:53:06.369985 disk-uuid[566]: Secondary Entries is updated.
Nov 12 20:53:06.369985 disk-uuid[566]: Secondary Header is updated.
Nov 12 20:53:06.374143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:06.378931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:06.499963 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:53:06.506458 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:53:06.506533 kernel: ata3.00: applying bridge limits
Nov 12 20:53:06.506544 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:06.506555 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:06.506565 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:06.506575 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:06.508363 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:06.510832 kernel: ata3.00: configured for UDMA/100
Nov 12 20:53:06.510856 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:53:06.565943 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:53:06.589373 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:53:06.589425 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:53:07.391974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:07.392568 disk-uuid[575]: The operation has completed successfully.
Nov 12 20:53:07.423700 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:07.423891 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:07.461336 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:07.465552 sh[591]: Success
Nov 12 20:53:07.478931 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:53:07.516417 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:07.548066 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:07.552114 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:07.589973 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:07.590043 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:07.590060 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:07.591159 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:07.592001 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:07.596935 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:07.599640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:07.620050 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:07.621968 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:07.636867 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:07.636943 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:07.636960 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:07.640938 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:07.651399 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:07.653194 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:07.664851 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:07.674070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:07.730361 ignition[696]: Ignition 2.19.0
Nov 12 20:53:07.730377 ignition[696]: Stage: fetch-offline
Nov 12 20:53:07.730430 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:07.730445 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:07.730563 ignition[696]: parsed url from cmdline: ""
Nov 12 20:53:07.730569 ignition[696]: no config URL provided
Nov 12 20:53:07.730577 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:07.730590 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:07.730626 ignition[696]: op(1): [started] loading QEMU firmware config module
Nov 12 20:53:07.730634 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:53:07.740581 ignition[696]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:53:07.748789 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:07.767061 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:07.786973 ignition[696]: parsing config with SHA512: 1c5e966c8740bb35631c71e27c821c5b4ef3e1c649be55d789f7701da7dee518a53a49a839352f77cc7d13b33969453758b43cd5dd73f212676b875694a92ed4
Nov 12 20:53:07.789994 systemd-networkd[779]: lo: Link UP
Nov 12 20:53:07.790003 systemd-networkd[779]: lo: Gained carrier
Nov 12 20:53:07.790565 ignition[696]: fetch-offline: fetch-offline passed
Nov 12 20:53:07.790230 unknown[696]: fetched base config from "system"
Nov 12 20:53:07.790623 ignition[696]: Ignition finished successfully
Nov 12 20:53:07.790238 unknown[696]: fetched user config from "qemu"
Nov 12 20:53:07.792003 systemd-networkd[779]: Enumeration completed
Nov 12 20:53:07.792182 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:07.792607 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:07.792612 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:07.793983 systemd-networkd[779]: eth0: Link UP
Nov 12 20:53:07.793988 systemd-networkd[779]: eth0: Gained carrier
Nov 12 20:53:07.793997 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:07.795121 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:07.807999 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:07.808254 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:53:07.820145 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:07.821973 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:53:07.834695 ignition[782]: Ignition 2.19.0
Nov 12 20:53:07.834708 ignition[782]: Stage: kargs
Nov 12 20:53:07.834982 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:07.835003 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:07.835960 ignition[782]: kargs: kargs passed
Nov 12 20:53:07.836033 ignition[782]: Ignition finished successfully
Nov 12 20:53:07.842805 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:07.854090 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:07.869012 ignition[791]: Ignition 2.19.0
Nov 12 20:53:07.869022 ignition[791]: Stage: disks
Nov 12 20:53:07.869194 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:07.869206 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:07.893734 ignition[791]: disks: disks passed
Nov 12 20:53:07.894403 ignition[791]: Ignition finished successfully
Nov 12 20:53:07.897512 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:07.898800 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:07.900584 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:07.902895 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:07.903929 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:07.906100 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:07.915047 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:07.972999 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:53:08.247258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:08.258082 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:08.358923 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:08.359382 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:08.361703 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:08.375124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:08.378423 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:08.381341 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:53:08.388523 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Nov 12 20:53:08.388554 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:08.388568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:08.388582 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:08.388597 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:08.381415 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:08.381451 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:08.400350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:08.403834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:08.405824 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:08.449720 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:08.454344 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:08.459650 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:08.463938 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:08.557224 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:08.565072 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:08.566882 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:08.574935 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:08.588799 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:08.596238 ignition[922]: INFO : Ignition 2.19.0
Nov 12 20:53:08.596238 ignition[922]: INFO : Stage: mount
Nov 12 20:53:08.596238 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:08.596238 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:08.597659 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:08.602854 ignition[922]: INFO : mount: mount passed
Nov 12 20:53:08.603748 ignition[922]: INFO : Ignition finished successfully
Nov 12 20:53:08.606275 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:08.619233 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:08.627228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:08.637927 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937)
Nov 12 20:53:08.640662 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:08.640686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:08.640706 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:08.643944 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:08.645809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:08.673672 ignition[954]: INFO : Ignition 2.19.0
Nov 12 20:53:08.673672 ignition[954]: INFO : Stage: files
Nov 12 20:53:08.676131 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:08.676131 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:08.676131 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:53:08.676131 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:53:08.676131 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:53:08.684219 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:53:08.684219 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:53:08.684219 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:53:08.684219 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:08.684219 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:53:08.679228 unknown[954]: wrote ssh authorized keys file for user: core
Nov 12 20:53:08.726414 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:53:08.809620 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:08.809620 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:53:08.814233 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Nov 12 20:53:09.199528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:53:09.436380 systemd-networkd[779]: eth0: Gained IPv6LL
Nov 12 20:53:09.482415 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:53:09.482415 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:53:09.486066 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:09.488306 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:09.488306 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:53:09.488306 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 12 20:53:09.492805 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:53:09.492805 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:53:09.492805 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 12 20:53:09.492805 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:53:09.616280 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:53:09.621189 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:53:09.623050 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:53:09.623050 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:09.625988 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:09.627520 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:09.629420 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:09.631223 ignition[954]: INFO : files: files passed
Nov 12 20:53:09.632058 ignition[954]: INFO : Ignition finished successfully
Nov 12 20:53:09.635713 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:53:09.653329 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:53:09.657480 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:53:09.660913 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:53:09.662209 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:53:09.667580 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 20:53:09.670307 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:09.670307 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:09.675388 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:09.677760 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:09.681809 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:53:09.693127 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:53:09.718991 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:53:09.719181 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:53:09.721984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:09.724218 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:53:09.726617 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:53:09.739094 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:53:09.753175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:09.757518 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:53:09.773281 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:09.774697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:09.777029 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:53:09.779089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:53:09.779220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:09.781562 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:53:09.783182 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:53:09.785344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:53:09.787459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:09.789717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:09.791993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:53:09.794183 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:09.796524 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:53:09.798569 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:53:09.800735 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:53:09.802514 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:53:09.802701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:09.804770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:09.806412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:09.808546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:53:09.808722 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:09.810787 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:53:09.810968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:09.813266 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:53:09.813437 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:09.815298 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:53:09.817004 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:53:09.821086 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:09.822849 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:53:09.824779 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:53:09.826581 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:53:09.826699 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:09.828606 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:53:09.828711 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:09.831096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:53:09.831236 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:09.833158 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:53:09.833277 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:53:09.841064 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:53:09.842759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:09.843804 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:53:09.844008 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:09.846072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:53:09.846178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:09.851658 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:53:09.851823 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:53:09.858197 ignition[1008]: INFO : Ignition 2.19.0
Nov 12 20:53:09.858197 ignition[1008]: INFO : Stage: umount
Nov 12 20:53:09.859941 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:09.859941 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:09.859941 ignition[1008]: INFO : umount: umount passed
Nov 12 20:53:09.859941 ignition[1008]: INFO : Ignition finished successfully
Nov 12 20:53:09.861529 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:53:09.861656 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:53:09.863443 systemd[1]: Stopped target network.target - Network.
Nov 12 20:53:09.865012 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:53:09.865070 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:53:09.866936 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:53:09.866988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:09.868982 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:53:09.869032 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:53:09.871005 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:53:09.871056 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:09.873120 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:53:09.875114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:09.876991 systemd-networkd[779]: eth0: DHCPv6 lease lost
Nov 12 20:53:09.878297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:53:09.880524 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:53:09.880690 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:53:09.882210 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:53:09.882265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:09.893022 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:53:09.895187 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:53:09.895256 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:09.897785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:09.900651 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:53:09.900772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:09.907662 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:53:09.907817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:09.910160 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:53:09.910236 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:09.911618 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:53:09.911677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:09.915086 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:53:09.915261 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:53:09.922858 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:53:09.923102 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:09.925209 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:53:09.925276 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:09.927425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:53:09.927472 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:09.929447 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:53:09.929498 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:09.931553 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:53:09.931604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:09.933580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:09.933630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:09.952197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:53:09.954966 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:53:09.955046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:09.958410 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:53:09.958467 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:09.961473 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:53:09.961528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:09.964102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:09.964162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:09.967308 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:53:09.967431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:53:10.041460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:53:10.041617 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:10.043882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:53:10.045037 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:53:10.045117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:10.065076 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:53:10.074824 systemd[1]: Switching root.
Nov 12 20:53:10.118720 systemd-journald[191]: Journal stopped
Nov 12 20:53:11.279688 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:53:11.279763 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:53:11.279781 kernel: SELinux: policy capability open_perms=1
Nov 12 20:53:11.279793 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:53:11.279804 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:53:11.279820 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:53:11.279831 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:53:11.279843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:53:11.279859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:53:11.279871 kernel: audit: type=1403 audit(1731444790.470:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:53:11.279884 systemd[1]: Successfully loaded SELinux policy in 40.475ms.
Nov 12 20:53:11.280022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.696ms.
Nov 12 20:53:11.280041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:11.280053 systemd[1]: Detected virtualization kvm.
Nov 12 20:53:11.280069 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:11.280081 systemd[1]: Detected first boot.
Nov 12 20:53:11.280093 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:53:11.280105 zram_generator::config[1053]: No configuration found.
Nov 12 20:53:11.280119 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:53:11.280131 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:53:11.280143 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:53:11.280156 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:53:11.280171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:53:11.280192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:53:11.280206 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:53:11.280218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:53:11.280231 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:53:11.280243 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:53:11.280255 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:53:11.280267 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:53:11.280283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:11.280297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:11.280312 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:53:11.280327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:53:11.280343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:53:11.280359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:11.280374 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:53:11.280390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:11.280404 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:53:11.280422 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:11.280437 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:11.280454 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:53:11.280469 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:11.280487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:11.280503 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:11.280517 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:11.280531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:53:11.280550 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:53:11.280567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:11.280583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:11.280600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:11.280616 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:53:11.280633 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:53:11.280649 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:53:11.280665 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:53:11.280683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:11.280702 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:53:11.280718 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:53:11.280734 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:53:11.280750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:53:11.280766 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:53:11.280782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:53:11.280801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:11.280817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:11.280833 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:53:11.280854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:11.280871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:11.280887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:11.280920 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:53:11.280937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:11.280953 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:53:11.280971 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:53:11.280986 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:53:11.281007 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:53:11.281024 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:53:11.281039 kernel: fuse: init (API version 7.39)
Nov 12 20:53:11.281055 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:11.281070 kernel: loop: module loaded
Nov 12 20:53:11.281085 kernel: ACPI: bus type drm_connector registered
Nov 12 20:53:11.281101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:11.281117 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:53:11.281133 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:53:11.281153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:11.281173 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:53:11.281224 systemd-journald[1124]: Collecting audit messages is disabled.
Nov 12 20:53:11.281253 systemd[1]: Stopped verity-setup.service.
Nov 12 20:53:11.281270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:11.281285 systemd-journald[1124]: Journal started
Nov 12 20:53:11.281318 systemd-journald[1124]: Runtime Journal (/run/log/journal/36cc87e42da3475ab02d0cd740f932c5) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:53:11.042276 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:53:11.061136 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:53:11.061613 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:53:11.286754 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:11.287622 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:53:11.289000 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:53:11.290413 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:53:11.291619 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:53:11.292935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:53:11.294240 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:53:11.295550 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:53:11.297094 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:11.298748 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:53:11.298945 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:53:11.300661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:11.300840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:11.302346 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:11.302518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:11.303956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:11.304130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:11.305762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:53:11.305953 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:53:11.307392 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:11.307562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:11.309103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:11.310572 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:53:11.312160 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:53:11.328471 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:53:11.340025 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:53:11.342438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:53:11.343612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:53:11.343646 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:11.345834 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:53:11.348470 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:53:11.353843 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:53:11.355350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:11.357554 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:53:11.360415 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:53:11.363258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:11.364793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:53:11.366274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:11.370662 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:11.371866 systemd-journald[1124]: Time spent on flushing to /var/log/journal/36cc87e42da3475ab02d0cd740f932c5 is 16.832ms for 950 entries.
Nov 12 20:53:11.371866 systemd-journald[1124]: System Journal (/var/log/journal/36cc87e42da3475ab02d0cd740f932c5) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:53:11.507514 systemd-journald[1124]: Received client request to flush runtime journal.
Nov 12 20:53:11.507559 kernel: loop0: detected capacity change from 0 to 140768
Nov 12 20:53:11.378248 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:53:11.381800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:11.385194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:53:11.386545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:53:11.388578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:53:11.406023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:53:11.408575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:11.477044 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:53:11.509685 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:53:11.542990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:53:11.546519 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:53:11.571816 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:53:11.578998 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:53:11.603940 kernel: loop1: detected capacity change from 0 to 142488
Nov 12 20:53:11.606927 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Nov 12 20:53:11.606947 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Nov 12 20:53:11.610756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:11.615576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:11.645204 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:53:11.724943 kernel: loop2: detected capacity change from 0 to 205544
Nov 12 20:53:11.744836 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:53:11.800215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:11.810926 kernel: loop3: detected capacity change from 0 to 140768
Nov 12 20:53:11.831853 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 20:53:11.831882 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 20:53:11.833889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:53:11.834794 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:53:11.840062 kernel: loop4: detected capacity change from 0 to 142488
Nov 12 20:53:11.840586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:11.854922 kernel: loop5: detected capacity change from 0 to 205544
Nov 12 20:53:11.859631 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 20:53:11.860268 (sd-merge)[1191]: Merged extensions into '/usr'.
Nov 12 20:53:11.882049 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:53:11.882197 systemd[1]: Reloading...
Nov 12 20:53:11.970953 zram_generator::config[1216]: No configuration found.
Nov 12 20:53:12.204700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:12.282341 systemd[1]: Reloading finished in 399 ms.
Nov 12 20:53:12.285620 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:53:12.316666 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:53:12.318879 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:53:12.347134 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:53:12.349876 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:12.357973 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:53:12.357990 systemd[1]: Reloading...
Nov 12 20:53:12.381166 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:53:12.381545 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:53:12.382550 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:53:12.382851 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Nov 12 20:53:12.382959 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Nov 12 20:53:12.386404 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:12.386417 systemd-tmpfiles[1258]: Skipping /boot
Nov 12 20:53:12.429154 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:12.429183 systemd-tmpfiles[1258]: Skipping /boot
Nov 12 20:53:12.436931 zram_generator::config[1283]: No configuration found.
Nov 12 20:53:12.572771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:12.636073 systemd[1]: Reloading finished in 277 ms.
Nov 12 20:53:12.654645 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:53:12.666676 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:12.677237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:12.680322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:53:12.683598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:53:12.689463 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:12.693543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:12.699251 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:53:12.704883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:12.705097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:12.709188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:12.714188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:12.717546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:12.717951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:12.723024 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:53:12.726335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:12.727478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:12.727674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:12.730699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:12.730893 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:12.735113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:12.735317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:12.743737 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:53:12.748543 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:53:12.748591 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Nov 12 20:53:12.756484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:12.756718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:12.757466 augenrules[1353]: No rules
Nov 12 20:53:12.765160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:12.769204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:12.773146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:12.775986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:12.777633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:12.779808 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:53:12.781406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:12.782367 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:53:12.785241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:12.787962 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:53:12.790156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:12.790336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:12.797021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:12.799365 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:12.799558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:12.816220 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:53:12.842136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:12.842350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:12.847325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:12.847526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:12.855295 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:53:12.865199 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:12.866446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:12.866542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:12.880895 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:53:12.882197 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:53:12.891831 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:53:12.958490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386)
Nov 12 20:53:12.969957 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376)
Nov 12 20:53:12.971879 systemd-resolved[1328]: Positive Trust Anchors:
Nov 12 20:53:12.972283 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:12.972402 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:12.972929 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376)
Nov 12 20:53:12.978843 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Nov 12 20:53:12.983941 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:53:12.985208 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:12.988071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:12.999168 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:53:13.046868 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:53:13.064185 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 20:53:13.064458 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 20:53:13.064648 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 20:53:13.062630 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:53:13.065613 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:53:13.067573 systemd-networkd[1395]: lo: Link UP
Nov 12 20:53:13.067582 systemd-networkd[1395]: lo: Gained carrier
Nov 12 20:53:13.074627 systemd-networkd[1395]: Enumeration completed
Nov 12 20:53:13.077162 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:53:13.079359 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:13.079368 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:13.079418 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:13.081316 systemd-networkd[1395]: eth0: Link UP
Nov 12 20:53:13.081324 systemd-networkd[1395]: eth0: Gained carrier
Nov 12 20:53:13.081340 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:13.081455 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:13.092303 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:53:13.096246 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Nov 12 20:53:13.142231 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:53:13.097071 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 20:53:13.097112 systemd-timesyncd[1397]: Initial clock synchronization to Tue 2024-11-12 20:53:13.242036 UTC.
Nov 12 20:53:13.193227 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:53:13.200108 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:53:13.202857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:13.255335 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:53:13.259179 kernel: kvm_amd: TSC scaling supported
Nov 12 20:53:13.259231 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 20:53:13.259247 kernel: kvm_amd: Nested Paging enabled
Nov 12 20:53:13.260076 kernel: kvm_amd: LBR virtualization supported
Nov 12 20:53:13.262192 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 20:53:13.262240 kernel: kvm_amd: Virtual GIF supported
Nov 12 20:53:13.283927 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:53:13.317505 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:53:13.350348 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:53:13.363374 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:13.406117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:13.422004 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:53:13.457624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:13.458799 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:13.460004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:53:13.461341 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:53:13.462873 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:53:13.464127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:53:13.465401 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:53:13.466679 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:53:13.466710 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:13.467614 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:13.469185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:53:13.472362 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:53:13.482675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:53:13.485316 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:53:13.486975 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:53:13.488513 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:13.489798 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:13.504973 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:13.505006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:13.506310 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:53:13.508831 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:53:13.510860 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:13.512611 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:53:13.517986 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:53:13.519325 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:53:13.523843 jq[1430]: false
Nov 12 20:53:13.523081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:53:13.526394 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:53:13.530511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:53:13.536369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:53:13.541203 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:53:13.541655 extend-filesystems[1431]: Found loop3
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found loop4
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found loop5
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found sr0
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda1
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda2
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda3
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found usr
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda4
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda6
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda7
Nov 12 20:53:13.543655 extend-filesystems[1431]: Found vda9
Nov 12 20:53:13.543655 extend-filesystems[1431]: Checking size of /dev/vda9
Nov 12 20:53:13.625021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Nov 12 20:53:13.542136 dbus-daemon[1429]: [system] SELinux support is enabled
Nov 12 20:53:13.543295 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:53:13.627704 extend-filesystems[1431]: Resized partition /dev/vda9
Nov 12 20:53:13.543802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:53:13.633156 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:53:13.548146 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:53:13.637166 jq[1443]: true
Nov 12 20:53:13.552104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:53:13.637379 update_engine[1441]: I20241112 20:53:13.580933 1441 main.cc:92] Flatcar Update Engine starting
Nov 12 20:53:13.637379 update_engine[1441]: I20241112 20:53:13.582357 1441 update_check_scheduler.cc:74] Next update check in 11m4s
Nov 12 20:53:13.557784 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:53:13.562268 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:53:13.570310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:53:13.570598 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:53:13.571083 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:53:13.571306 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:53:13.588327 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:53:13.588546 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:53:13.641195 jq[1455]: true
Nov 12 20:53:13.648370 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:53:13.653303 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:53:13.684309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:53:13.684434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:53:13.686161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:53:13.686179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:53:13.693042 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:53:13.723118 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:53:13.728589 tar[1453]: linux-amd64/helm
Nov 12 20:53:13.767823 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:53:13.850395 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:53:13.855103 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:53:13.855141 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:53:13.855932 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:53:13.856592 systemd-logind[1438]: New seat seat0.
Nov 12 20:53:13.860152 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:53:13.893630 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:53:13.917977 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:53:13.918232 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:53:13.932007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 20:53:13.934922 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:51860.service - OpenSSH per-connection server daemon (10.0.0.1:51860).
Nov 12 20:53:13.940010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:53:13.983933 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 20:53:14.000327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:53:14.010337 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:53:14.013953 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:53:14.016388 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:53:14.026082 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:53:14.026082 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:53:14.026082 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 20:53:14.038623 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Nov 12 20:53:14.030188 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:53:14.030451 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:53:14.065235 sshd[1501]: Connection closed by authenticating user core 10.0.0.1 port 51860 [preauth]
Nov 12 20:53:14.068487 systemd[1]: sshd@0-10.0.0.136:22-10.0.0.1:51860.service: Deactivated successfully.
Nov 12 20:53:14.072368 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:53:14.077307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:53:14.083847 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:53:14.337502 containerd[1456]: time="2024-11-12T20:53:14.337274937Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:53:14.370513 containerd[1456]: time="2024-11-12T20:53:14.370425323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.373156 containerd[1456]: time="2024-11-12T20:53:14.373098473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:14.373156 containerd[1456]: time="2024-11-12T20:53:14.373140015Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:53:14.373230 containerd[1456]: time="2024-11-12T20:53:14.373162467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:53:14.373503 containerd[1456]: time="2024-11-12T20:53:14.373469872Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:53:14.373562 containerd[1456]: time="2024-11-12T20:53:14.373501515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.373673 containerd[1456]: time="2024-11-12T20:53:14.373631363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:14.373673 containerd[1456]: time="2024-11-12T20:53:14.373664462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374092 containerd[1456]: time="2024-11-12T20:53:14.374054280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374092 containerd[1456]: time="2024-11-12T20:53:14.374086034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374169 containerd[1456]: time="2024-11-12T20:53:14.374127837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374201 containerd[1456]: time="2024-11-12T20:53:14.374164799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374421 containerd[1456]: time="2024-11-12T20:53:14.374390314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.374859 containerd[1456]: time="2024-11-12T20:53:14.374828991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:14.375077 containerd[1456]: time="2024-11-12T20:53:14.375046358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:14.375106 containerd[1456]: time="2024-11-12T20:53:14.375072795Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:53:14.375263 containerd[1456]: time="2024-11-12T20:53:14.375237037Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:53:14.375362 containerd[1456]: time="2024-11-12T20:53:14.375337486Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:53:14.387813 containerd[1456]: time="2024-11-12T20:53:14.387768945Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:53:14.387930 containerd[1456]: time="2024-11-12T20:53:14.387828238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:53:14.387930 containerd[1456]: time="2024-11-12T20:53:14.387863935Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:53:14.387930 containerd[1456]: time="2024-11-12T20:53:14.387888472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:53:14.387930 containerd[1456]: time="2024-11-12T20:53:14.387919215Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:53:14.388165 containerd[1456]: time="2024-11-12T20:53:14.388136026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:53:14.388574 containerd[1456]: time="2024-11-12T20:53:14.388528805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.388992394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389048229Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389076283Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389114760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389133028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389152025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389168958Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389185689Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389204008Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389217423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389233023Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389263655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389284886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.390959 containerd[1456]: time="2024-11-12T20:53:14.389298776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389316579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389334028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389352427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389367188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389383444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389477595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389517336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389531632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389591066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389618554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389636681Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389677948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389732247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391452 containerd[1456]: time="2024-11-12T20:53:14.389773848Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389884347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389931811Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389945934Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389962222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389974514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.389992711Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.390006248Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 20:53:14.391819 containerd[1456]: time="2024-11-12T20:53:14.390017743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 20:53:14.392169 containerd[1456]: time="2024-11-12T20:53:14.390450497Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 20:53:14.392169 containerd[1456]: time="2024-11-12T20:53:14.390576473Z" level=info msg="Connect containerd service"
Nov 12 20:53:14.392169 containerd[1456]: time="2024-11-12T20:53:14.390649322Z" level=info msg="using legacy CRI server"
Nov 12 20:53:14.392169 containerd[1456]: time="2024-11-12T20:53:14.390666236Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 20:53:14.392169 containerd[1456]: time="2024-11-12T20:53:14.390856287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 20:53:14.398338 containerd[1456]: time="2024-11-12T20:53:14.397936270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:53:14.398338 containerd[1456]: time="2024-11-12T20:53:14.398193408Z" level=info msg="Start subscribing containerd event"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399143987Z" level=info msg="Start recovering state"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399187762Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399293509Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399317933Z" level=info msg="Start event monitor"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399401975Z" level=info msg="Start snapshots syncer"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399417948Z" level=info msg="Start cni network conf syncer for default"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399426259Z" level=info msg="Start streaming server"
Nov 12 20:53:14.401106 containerd[1456]: time="2024-11-12T20:53:14.399572666Z" level=info msg="containerd successfully booted in 0.066550s"
Nov 12 20:53:14.399756 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 20:53:14.471634 tar[1453]: linux-amd64/LICENSE
Nov 12 20:53:14.471781 tar[1453]: linux-amd64/README.md
Nov 12 20:53:14.496128 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:53:14.876583 systemd-networkd[1395]: eth0: Gained IPv6LL
Nov 12 20:53:14.881323 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:53:14.883456 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:53:14.894398 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 20:53:14.897697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:14.900593 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:53:14.921355 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 12 20:53:14.921710 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 12 20:53:14.923694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:53:14.931112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:53:16.371069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:16.373783 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 20:53:16.375786 systemd[1]: Startup finished in 829ms (kernel) + 5.764s (initrd) + 5.944s (userspace) = 12.538s.
Nov 12 20:53:16.403491 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:17.100357 kubelet[1546]: E1112 20:53:17.100243 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:17.141043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:17.141272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:17.141731 systemd[1]: kubelet.service: Consumed 2.049s CPU time.
Nov 12 20:53:24.146301 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:34562.service - OpenSSH per-connection server daemon (10.0.0.1:34562).
Nov 12 20:53:24.183744 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 34562 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:24.186369 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:24.195994 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:53:24.208203 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:53:24.210509 systemd-logind[1438]: New session 1 of user core.
Nov 12 20:53:24.222265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:53:24.246451 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 20:53:24.250982 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:53:24.404168 systemd[1563]: Queued start job for default target default.target.
Nov 12 20:53:24.414873 systemd[1563]: Created slice app.slice - User Application Slice.
Nov 12 20:53:24.414931 systemd[1563]: Reached target paths.target - Paths.
Nov 12 20:53:24.414952 systemd[1563]: Reached target timers.target - Timers.
Nov 12 20:53:24.417266 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:53:24.430023 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 20:53:24.430218 systemd[1563]: Reached target sockets.target - Sockets.
Nov 12 20:53:24.430242 systemd[1563]: Reached target basic.target - Basic System.
Nov 12 20:53:24.430307 systemd[1563]: Reached target default.target - Main User Target.
Nov 12 20:53:24.430354 systemd[1563]: Startup finished in 169ms.
Nov 12 20:53:24.430891 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:53:24.432722 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:53:24.503881 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:34572.service - OpenSSH per-connection server daemon (10.0.0.1:34572).
Nov 12 20:53:24.541323 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 34572 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:24.542848 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:24.546656 systemd-logind[1438]: New session 2 of user core.
Nov 12 20:53:24.553047 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:53:24.607454 sshd[1574]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:24.623654 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:34572.service: Deactivated successfully.
Nov 12 20:53:24.625595 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 20:53:24.627281 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Nov 12 20:53:24.641159 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:34586.service - OpenSSH per-connection server daemon (10.0.0.1:34586).
Nov 12 20:53:24.642053 systemd-logind[1438]: Removed session 2.
Nov 12 20:53:24.671423 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 34586 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:24.672900 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:24.677019 systemd-logind[1438]: New session 3 of user core.
Nov 12 20:53:24.687016 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 20:53:24.737043 sshd[1581]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:24.747430 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:34586.service: Deactivated successfully.
Nov 12 20:53:24.749310 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 20:53:24.750736 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
Nov 12 20:53:24.752074 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:34598.service - OpenSSH per-connection server daemon (10.0.0.1:34598).
Nov 12 20:53:24.752946 systemd-logind[1438]: Removed session 3.
Nov 12 20:53:24.786800 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 34598 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:24.788381 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:24.792539 systemd-logind[1438]: New session 4 of user core.
Nov 12 20:53:24.803040 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 20:53:24.856922 sshd[1588]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:24.875459 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:34598.service: Deactivated successfully.
Nov 12 20:53:24.878169 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 20:53:24.880359 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
Nov 12 20:53:24.888308 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610).
Nov 12 20:53:24.889447 systemd-logind[1438]: Removed session 4.
Nov 12 20:53:24.919166 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:24.920574 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:24.925230 systemd-logind[1438]: New session 5 of user core.
Nov 12 20:53:24.946313 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 20:53:25.009239 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 20:53:25.009602 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:25.032080 sudo[1598]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:25.034291 sshd[1595]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:25.051115 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:34610.service: Deactivated successfully.
Nov 12 20:53:25.053536 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 20:53:25.055281 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Nov 12 20:53:25.064321 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:34626.service - OpenSSH per-connection server daemon (10.0.0.1:34626).
Nov 12 20:53:25.065395 systemd-logind[1438]: Removed session 5.
Nov 12 20:53:25.096346 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:25.098029 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:25.102967 systemd-logind[1438]: New session 6 of user core.
Nov 12 20:53:25.112086 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 20:53:25.169188 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 20:53:25.169532 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:25.174304 sudo[1607]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:25.181848 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 20:53:25.182233 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:25.203254 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:25.205186 auditctl[1610]: No rules
Nov 12 20:53:25.206048 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 20:53:25.206464 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:25.210719 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:25.247099 augenrules[1628]: No rules
Nov 12 20:53:25.249238 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:25.250636 sudo[1606]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:25.253069 sshd[1603]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:25.266400 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:34626.service: Deactivated successfully.
Nov 12 20:53:25.268457 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 20:53:25.270217 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Nov 12 20:53:25.277307 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:34636.service - OpenSSH per-connection server daemon (10.0.0.1:34636).
Nov 12 20:53:25.278224 systemd-logind[1438]: Removed session 6.
Nov 12 20:53:25.314780 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 34636 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:25.316862 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:25.320895 systemd-logind[1438]: New session 7 of user core.
Nov 12 20:53:25.335075 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 20:53:25.390545 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 20:53:25.390900 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:26.392460 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 20:53:26.392579 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 20:53:27.318685 dockerd[1658]: time="2024-11-12T20:53:27.317814125Z" level=info msg="Starting up"
Nov 12 20:53:27.319969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:53:27.328409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:27.674457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:27.679638 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:27.775468 kubelet[1690]: E1112 20:53:27.775385 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:27.782379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:27.782622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:28.366350 dockerd[1658]: time="2024-11-12T20:53:28.366295617Z" level=info msg="Loading containers: start."
Nov 12 20:53:28.868931 kernel: Initializing XFRM netlink socket
Nov 12 20:53:28.957083 systemd-networkd[1395]: docker0: Link UP
Nov 12 20:53:29.164054 dockerd[1658]: time="2024-11-12T20:53:29.163878997Z" level=info msg="Loading containers: done."
Nov 12 20:53:29.178933 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck179316988-merged.mount: Deactivated successfully.
Nov 12 20:53:29.193785 dockerd[1658]: time="2024-11-12T20:53:29.193715714Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 20:53:29.193914 dockerd[1658]: time="2024-11-12T20:53:29.193868243Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 12 20:53:29.194119 dockerd[1658]: time="2024-11-12T20:53:29.194083675Z" level=info msg="Daemon has completed initialization"
Nov 12 20:53:29.241178 dockerd[1658]: time="2024-11-12T20:53:29.241063558Z" level=info msg="API listen on /run/docker.sock"
Nov 12 20:53:29.241350 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 20:53:29.936583 containerd[1456]: time="2024-11-12T20:53:29.936512192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\""
Nov 12 20:53:35.612224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002428867.mount: Deactivated successfully.
Nov 12 20:53:36.832391 containerd[1456]: time="2024-11-12T20:53:36.832309002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:36.833670 containerd[1456]: time="2024-11-12T20:53:36.833610094Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588"
Nov 12 20:53:36.835399 containerd[1456]: time="2024-11-12T20:53:36.835355881Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:36.839356 containerd[1456]: time="2024-11-12T20:53:36.839257762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:36.840840 containerd[1456]: time="2024-11-12T20:53:36.840752459Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 6.904165999s"
Nov 12 20:53:36.840840 containerd[1456]: time="2024-11-12T20:53:36.840826400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\""
Nov 12 20:53:36.842734 containerd[1456]: time="2024-11-12T20:53:36.842689444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\""
Nov 12 20:53:38.033173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 20:53:38.046112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:38.197728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:38.202552 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:38.423640 kubelet[1887]: E1112 20:53:38.423425 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:38.428189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:38.428598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:39.269074 containerd[1456]: time="2024-11-12T20:53:39.269017459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:39.273156 containerd[1456]: time="2024-11-12T20:53:39.273107986Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922"
Nov 12 20:53:39.281965 containerd[1456]: time="2024-11-12T20:53:39.281897217Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:39.289346 containerd[1456]: time="2024-11-12T20:53:39.289312737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:39.290593 containerd[1456]: time="2024-11-12T20:53:39.290548731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 2.447808385s"
Nov 12 20:53:39.290593 containerd[1456]: time="2024-11-12T20:53:39.290589189Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\""
Nov 12 20:53:39.291496 containerd[1456]: time="2024-11-12T20:53:39.291471603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\""
Nov 12 20:53:42.239819 containerd[1456]: time="2024-11-12T20:53:42.239745176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:42.241238 containerd[1456]: time="2024-11-12T20:53:42.241194164Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606"
Nov 12 20:53:42.242671 containerd[1456]: time="2024-11-12T20:53:42.242603229Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:42.245674 containerd[1456]: time="2024-11-12T20:53:42.245636911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:42.247006 containerd[1456]: time="2024-11-12T20:53:42.246969008Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 2.955465047s"
Nov 12 20:53:42.247006 containerd[1456]: time="2024-11-12T20:53:42.247004326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\""
Nov 12 20:53:42.247627 containerd[1456]: time="2024-11-12T20:53:42.247582870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\""
Nov 12 20:53:43.804192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250770655.mount: Deactivated successfully.
Nov 12 20:53:45.550305 containerd[1456]: time="2024-11-12T20:53:45.550217253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:45.599674 containerd[1456]: time="2024-11-12T20:53:45.599554896Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814"
Nov 12 20:53:45.649463 containerd[1456]: time="2024-11-12T20:53:45.649379661Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:45.722827 containerd[1456]: time="2024-11-12T20:53:45.722726245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:45.723608 containerd[1456]: time="2024-11-12T20:53:45.723540157Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 3.47591841s"
Nov 12 20:53:45.723689 containerd[1456]: time="2024-11-12T20:53:45.723605932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\""
Nov 12 20:53:45.724321 containerd[1456]: time="2024-11-12T20:53:45.724239916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 20:53:47.613002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245405525.mount: Deactivated successfully.
Nov 12 20:53:48.385400 containerd[1456]: time="2024-11-12T20:53:48.385310338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:48.386278 containerd[1456]: time="2024-11-12T20:53:48.386185121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 20:53:48.387471 containerd[1456]: time="2024-11-12T20:53:48.387427625Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:48.392210 containerd[1456]: time="2024-11-12T20:53:48.392165189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:48.393592 containerd[1456]: time="2024-11-12T20:53:48.393527368Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.669249146s"
Nov 12 20:53:48.393592 containerd[1456]: time="2024-11-12T20:53:48.393584784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 20:53:48.394328 containerd[1456]: time="2024-11-12T20:53:48.394276970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 12 20:53:48.678738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 20:53:48.687169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:48.838607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:48.844682 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:48.882263 kubelet[1964]: E1112 20:53:48.882161 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:48.886675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:48.886972 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:50.740418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771505735.mount: Deactivated successfully.
Nov 12 20:53:50.752167 containerd[1456]: time="2024-11-12T20:53:50.752060959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:50.752878 containerd[1456]: time="2024-11-12T20:53:50.752818663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 12 20:53:50.754128 containerd[1456]: time="2024-11-12T20:53:50.754083135Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:50.756879 containerd[1456]: time="2024-11-12T20:53:50.756804111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:50.757605 containerd[1456]: time="2024-11-12T20:53:50.757533838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.363213323s"
Nov 12 20:53:50.757605 containerd[1456]: time="2024-11-12T20:53:50.757595739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 12 20:53:50.758370 containerd[1456]: time="2024-11-12T20:53:50.758337447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Nov 12 20:53:51.288765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089018808.mount: Deactivated successfully.
Nov 12 20:53:53.979383 containerd[1456]: time="2024-11-12T20:53:53.979290049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:54.638001 containerd[1456]: time="2024-11-12T20:53:54.637867101Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650"
Nov 12 20:53:54.702434 containerd[1456]: time="2024-11-12T20:53:54.702338369Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:54.737765 containerd[1456]: time="2024-11-12T20:53:54.737673792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:54.739211 containerd[1456]: time="2024-11-12T20:53:54.739177814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.980800709s"
Nov 12 20:53:54.739279 containerd[1456]: time="2024-11-12T20:53:54.739226008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Nov 12 20:53:57.215943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:57.231262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:57.264145 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-7.scope)...
Nov 12 20:53:57.264168 systemd[1]: Reloading...
Nov 12 20:53:57.367936 zram_generator::config[2100]: No configuration found.
Nov 12 20:53:58.107803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:58.201970 systemd[1]: Reloading finished in 937 ms.
Nov 12 20:53:58.260447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:58.264466 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:53:58.264735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:58.266593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:58.426954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:58.447417 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:53:58.494473 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:53:58.494473 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:53:58.494473 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:53:58.497322 kubelet[2149]: I1112 20:53:58.497268 2149 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:53:58.857397 update_engine[1441]: I20241112 20:53:58.857206 1441 update_attempter.cc:509] Updating boot flags...
Nov 12 20:53:58.858831 kubelet[2149]: I1112 20:53:58.858786 2149 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:53:58.858831 kubelet[2149]: I1112 20:53:58.858819 2149 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:53:58.859135 kubelet[2149]: I1112 20:53:58.859111 2149 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:53:59.457934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2164) Nov 12 20:53:59.473922 kubelet[2149]: I1112 20:53:59.473596 2149 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:53:59.473922 kubelet[2149]: E1112 20:53:59.473793 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:59.491451 kubelet[2149]: E1112 20:53:59.490662 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:53:59.491451 kubelet[2149]: I1112 20:53:59.490731 2149 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:53:59.499650 kubelet[2149]: I1112 20:53:59.499601 2149 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:53:59.538697 kubelet[2149]: I1112 20:53:59.538627 2149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:53:59.539070 kubelet[2149]: I1112 20:53:59.539008 2149 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:53:59.539319 kubelet[2149]: I1112 20:53:59.539067 2149 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:53:59.539433 kubelet[2149]: I1112 20:53:59.539334 2149 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:53:59.539433 kubelet[2149]: I1112 20:53:59.539346 2149 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:53:59.539618 kubelet[2149]: I1112 20:53:59.539586 2149 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:59.543855 kubelet[2149]: I1112 20:53:59.543810 2149 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:53:59.543929 kubelet[2149]: I1112 20:53:59.543872 2149 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:53:59.544930 kubelet[2149]: I1112 20:53:59.543966 2149 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:53:59.544930 kubelet[2149]: I1112 20:53:59.544008 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:53:59.554362 kubelet[2149]: I1112 20:53:59.553579 2149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:53:59.556165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2166) Nov 12 20:53:59.557280 kubelet[2149]: I1112 20:53:59.557247 2149 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:53:59.558266 kubelet[2149]: W1112 20:53:59.558200 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:53:59.558315 kubelet[2149]: E1112 20:53:59.558280 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: 
connect: connection refused" logger="UnhandledError" Nov 12 20:53:59.558787 kubelet[2149]: W1112 20:53:59.558757 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:53:59.559584 kubelet[2149]: I1112 20:53:59.559556 2149 server.go:1269] "Started kubelet" Nov 12 20:53:59.563544 kubelet[2149]: I1112 20:53:59.563366 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:53:59.564969 kubelet[2149]: I1112 20:53:59.564937 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:53:59.566025 kubelet[2149]: W1112 20:53:59.565591 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:53:59.566144 kubelet[2149]: E1112 20:53:59.566127 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:59.566198 kubelet[2149]: I1112 20:53:59.565245 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:53:59.567294 kubelet[2149]: I1112 20:53:59.565225 2149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:53:59.567402 kubelet[2149]: I1112 20:53:59.566974 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:53:59.572218 kubelet[2149]: I1112 20:53:59.571828 2149 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:53:59.572700 kubelet[2149]: I1112 20:53:59.572664 2149 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:53:59.574835 kubelet[2149]: E1112 20:53:59.573559 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:59.574835 kubelet[2149]: E1112 20:53:59.572834 2149 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:53:59.574835 kubelet[2149]: I1112 20:53:59.572941 2149 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:53:59.574835 kubelet[2149]: W1112 20:53:59.573208 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:53:59.574835 kubelet[2149]: E1112 20:53:59.573612 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:59.574835 kubelet[2149]: I1112 20:53:59.573368 2149 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:53:59.574835 kubelet[2149]: I1112 20:53:59.573698 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:53:59.574835 kubelet[2149]: E1112 20:53:59.574112 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Nov 12 20:53:59.574835 kubelet[2149]: I1112 20:53:59.574199 2149 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:53:59.575191 kubelet[2149]: E1112 20:53:59.567780 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753e87221086e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:53:59.559530606 +0000 UTC m=+1.107459822,LastTimestamp:2024-11-12 20:53:59.559530606 +0000 UTC m=+1.107459822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:53:59.577450 kubelet[2149]: I1112 20:53:59.577109 2149 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:53:59.619015 kubelet[2149]: I1112 20:53:59.618969 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:53:59.619015 kubelet[2149]: I1112 20:53:59.619018 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:53:59.619178 kubelet[2149]: I1112 20:53:59.619039 2149 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:59.620403 kubelet[2149]: I1112 20:53:59.620185 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:53:59.621653 kubelet[2149]: I1112 20:53:59.621621 2149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:53:59.622162 kubelet[2149]: I1112 20:53:59.622148 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:53:59.622693 kubelet[2149]: I1112 20:53:59.622239 2149 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:53:59.622693 kubelet[2149]: E1112 20:53:59.622288 2149 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:53:59.623179 kubelet[2149]: W1112 20:53:59.623151 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:53:59.623314 kubelet[2149]: E1112 20:53:59.623297 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:59.630974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2166) Nov 12 20:53:59.674394 kubelet[2149]: E1112 20:53:59.674311 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:59.709290 kubelet[2149]: I1112 20:53:59.708801 2149 policy_none.go:49] "None policy: Start" Nov 12 20:53:59.710402 kubelet[2149]: I1112 20:53:59.710351 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:53:59.710549 kubelet[2149]: I1112 20:53:59.710418 2149 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:53:59.723162 kubelet[2149]: E1112 20:53:59.723121 2149 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:53:59.726000 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:53:59.738818 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:53:59.742310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:53:59.750054 kubelet[2149]: I1112 20:53:59.750013 2149 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:53:59.750358 kubelet[2149]: I1112 20:53:59.750333 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:53:59.750430 kubelet[2149]: I1112 20:53:59.750378 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:53:59.750806 kubelet[2149]: I1112 20:53:59.750678 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:53:59.751853 kubelet[2149]: E1112 20:53:59.751827 2149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:53:59.775043 kubelet[2149]: E1112 20:53:59.774960 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Nov 12 20:53:59.852350 kubelet[2149]: I1112 20:53:59.852318 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:59.852875 kubelet[2149]: E1112 20:53:59.852848 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Nov 12 20:53:59.932984 systemd[1]: Created slice kubepods-burstable-pod9670d5142020e87ec5bb7dbda8890348.slice - libcontainer container kubepods-burstable-pod9670d5142020e87ec5bb7dbda8890348.slice. Nov 12 20:53:59.947087 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice. Nov 12 20:53:59.961590 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice. 
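The lease controller's retry interval doubles on each failure: the log shows interval="200ms", then "400ms" above, with "800ms" and "1.6s" further down. A sketch of that doubling schedule; the upper bound here is an assumption for illustration, not a value taken from these logs:

```go
// Doubling retry interval as seen in the "Failed to ensure lease
// exists, will retry" lines: 200ms -> 400ms -> 800ms -> 1.6s.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, not evidenced by this log
	for i := 0; i < 6; i++ {
		fmt.Println("retry in", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```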
Nov 12 20:53:59.976334 kubelet[2149]: I1112 20:53:59.976283 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:59.976471 kubelet[2149]: I1112 20:53:59.976346 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:59.979373 kubelet[2149]: I1112 20:53:59.979313 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:59.979438 kubelet[2149]: I1112 20:53:59.979373 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:59.979438 kubelet[2149]: I1112 20:53:59.979410 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:59.979438 kubelet[2149]: I1112 20:53:59.979436 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:53:59.979543 kubelet[2149]: I1112 20:53:59.979453 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:59.979543 kubelet[2149]: I1112 20:53:59.979476 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:59.979543 kubelet[2149]: I1112 20:53:59.979496 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:00.054710 kubelet[2149]: I1112 20:54:00.054665 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:54:00.055165 kubelet[2149]: E1112 20:54:00.055115 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Nov 12 20:54:00.128382 kubelet[2149]: E1112 20:54:00.128263 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753e87221086e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:53:59.559530606 +0000 UTC m=+1.107459822,LastTimestamp:2024-11-12 20:53:59.559530606 +0000 UTC m=+1.107459822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:54:00.176136 kubelet[2149]: E1112 20:54:00.176072 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Nov 12 20:54:00.245724 kubelet[2149]: E1112 20:54:00.245576 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:00.246530 containerd[1456]: time="2024-11-12T20:54:00.246480990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9670d5142020e87ec5bb7dbda8890348,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:00.259756 kubelet[2149]: E1112 20:54:00.259702 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:00.260439 containerd[1456]: time="2024-11-12T20:54:00.260393002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:00.264701 kubelet[2149]: E1112 20:54:00.264667 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:00.265118 containerd[1456]: time="2024-11-12T20:54:00.265076568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:00.438854 kubelet[2149]: W1112 20:54:00.438791 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:54:00.438854 kubelet[2149]: E1112 20:54:00.438855 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:00.457580 kubelet[2149]: I1112 20:54:00.457533 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:54:00.458189 kubelet[2149]: E1112 20:54:00.458151 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Nov 12 20:54:00.652417 kubelet[2149]: W1112 20:54:00.652221 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:54:00.652417 kubelet[2149]: E1112 20:54:00.652317 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:00.955311 kubelet[2149]: W1112 20:54:00.955225 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:54:00.955311 kubelet[2149]: E1112 20:54:00.955311 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:00.977372 kubelet[2149]: E1112 20:54:00.977325 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Nov 12 20:54:01.143319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117421962.mount: Deactivated successfully. 
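The dns.go warnings above stem from the resolver limit of three nameservers: this host's /etc/resolv.conf lists more than three, so the kubelet keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A sketch of that truncation, assuming the same path and limit; the parsing is illustrative, not kubelet's implementation:

```go
// Keep only the first three "nameserver" entries, mirroring the
// "Nameserver limits exceeded" behavior in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Println("limit exceeded, applying:", strings.Join(servers[:3], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(servers, " "))
	}
}
```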
Nov 12 20:54:01.149732 containerd[1456]: time="2024-11-12T20:54:01.149686118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:01.150795 containerd[1456]: time="2024-11-12T20:54:01.150740205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:01.151654 containerd[1456]: time="2024-11-12T20:54:01.151567578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:54:01.152672 containerd[1456]: time="2024-11-12T20:54:01.152599123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:01.153502 containerd[1456]: time="2024-11-12T20:54:01.153436018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:54:01.154301 containerd[1456]: time="2024-11-12T20:54:01.154221906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:54:01.155218 containerd[1456]: time="2024-11-12T20:54:01.155175602Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:01.159214 containerd[1456]: time="2024-11-12T20:54:01.159155053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:01.160124 containerd[1456]: time="2024-11-12T20:54:01.160085454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 913.50391ms" Nov 12 20:54:01.162861 containerd[1456]: time="2024-11-12T20:54:01.162814565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 897.657018ms" Nov 12 20:54:01.164560 containerd[1456]: time="2024-11-12T20:54:01.164268498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 903.781526ms" Nov 12 20:54:01.174243 kubelet[2149]: W1112 20:54:01.174155 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Nov 12 20:54:01.174243 kubelet[2149]: 
E1112 20:54:01.174249 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:01.260661 kubelet[2149]: I1112 20:54:01.260499 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:54:01.260929 kubelet[2149]: E1112 20:54:01.260868 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Nov 12 20:54:01.398522 containerd[1456]: time="2024-11-12T20:54:01.398370471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:01.398522 containerd[1456]: time="2024-11-12T20:54:01.398426009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:01.398522 containerd[1456]: time="2024-11-12T20:54:01.398462554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.399324 containerd[1456]: time="2024-11-12T20:54:01.398559298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.402477 containerd[1456]: time="2024-11-12T20:54:01.402296449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:01.402477 containerd[1456]: time="2024-11-12T20:54:01.402380053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:01.402477 containerd[1456]: time="2024-11-12T20:54:01.402440362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.402654 containerd[1456]: time="2024-11-12T20:54:01.402555348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.402654 containerd[1456]: time="2024-11-12T20:54:01.402069434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:01.402654 containerd[1456]: time="2024-11-12T20:54:01.402116714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:01.402654 containerd[1456]: time="2024-11-12T20:54:01.402161879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.402654 containerd[1456]: time="2024-11-12T20:54:01.402258302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:01.522208 systemd[1]: Started cri-containerd-6a50d648024a329878d7d5ac32f66b749577947dc186adf468fb92980ac8a0a7.scope - libcontainer container 6a50d648024a329878d7d5ac32f66b749577947dc186adf468fb92980ac8a0a7. 
Nov 12 20:54:01.526208 systemd[1]: Started cri-containerd-a23684de7490a7fc0d997deb1c41afec244f4491602c3c2026170bb8145927c3.scope - libcontainer container a23684de7490a7fc0d997deb1c41afec244f4491602c3c2026170bb8145927c3. Nov 12 20:54:01.527449 kubelet[2149]: E1112 20:54:01.527421 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:54:01.545288 systemd[1]: Started cri-containerd-3c985bbcf3048de5676ee520855438e3de1fa1a50b8bb8ce12dc89f748e3c0f5.scope - libcontainer container 3c985bbcf3048de5676ee520855438e3de1fa1a50b8bb8ce12dc89f748e3c0f5. Nov 12 20:54:01.578298 containerd[1456]: time="2024-11-12T20:54:01.578214570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a50d648024a329878d7d5ac32f66b749577947dc186adf468fb92980ac8a0a7\"" Nov 12 20:54:01.580033 kubelet[2149]: E1112 20:54:01.579998 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:01.583029 containerd[1456]: time="2024-11-12T20:54:01.582977093Z" level=info msg="CreateContainer within sandbox \"6a50d648024a329878d7d5ac32f66b749577947dc186adf468fb92980ac8a0a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:54:01.588825 containerd[1456]: time="2024-11-12T20:54:01.588468260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23684de7490a7fc0d997deb1c41afec244f4491602c3c2026170bb8145927c3\"" Nov 12 20:54:01.589011 kubelet[2149]: E1112 20:54:01.588983 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:01.590419 containerd[1456]: time="2024-11-12T20:54:01.590385294Z" level=info msg="CreateContainer within sandbox \"a23684de7490a7fc0d997deb1c41afec244f4491602c3c2026170bb8145927c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:54:01.598934 containerd[1456]: time="2024-11-12T20:54:01.598860120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9670d5142020e87ec5bb7dbda8890348,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c985bbcf3048de5676ee520855438e3de1fa1a50b8bb8ce12dc89f748e3c0f5\"" Nov 12 20:54:01.599696 kubelet[2149]: E1112 20:54:01.599620 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:01.601202 containerd[1456]: time="2024-11-12T20:54:01.601167035Z" level=info msg="CreateContainer within sandbox \"3c985bbcf3048de5676ee520855438e3de1fa1a50b8bb8ce12dc89f748e3c0f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:54:01.675926 containerd[1456]: time="2024-11-12T20:54:01.675842293Z" level=info msg="CreateContainer within sandbox 
\"6a50d648024a329878d7d5ac32f66b749577947dc186adf468fb92980ac8a0a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3caa7e0cbeace5760686cd8c36ffee50485a7198c0fcbd55a4ff6ae41992f77c\"" Nov 12 20:54:01.676692 containerd[1456]: time="2024-11-12T20:54:01.676657938Z" level=info msg="StartContainer for \"3caa7e0cbeace5760686cd8c36ffee50485a7198c0fcbd55a4ff6ae41992f77c\"" Nov 12 20:54:01.681373 containerd[1456]: time="2024-11-12T20:54:01.681310738Z" level=info msg="CreateContainer within sandbox \"a23684de7490a7fc0d997deb1c41afec244f4491602c3c2026170bb8145927c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f9ce82b4964fe01e63168b1cad113dc2db7960c3b67157f5853dcbbb9421465\"" Nov 12 20:54:01.681863 containerd[1456]: time="2024-11-12T20:54:01.681786288Z" level=info msg="StartContainer for \"1f9ce82b4964fe01e63168b1cad113dc2db7960c3b67157f5853dcbbb9421465\"" Nov 12 20:54:01.683601 containerd[1456]: time="2024-11-12T20:54:01.683558946Z" level=info msg="CreateContainer within sandbox \"3c985bbcf3048de5676ee520855438e3de1fa1a50b8bb8ce12dc89f748e3c0f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6881a9307540fc45737a50d5630d52be0a79f5bfd8341bf1386498dd2a48778b\"" Nov 12 20:54:01.684197 containerd[1456]: time="2024-11-12T20:54:01.684075181Z" level=info msg="StartContainer for \"6881a9307540fc45737a50d5630d52be0a79f5bfd8341bf1386498dd2a48778b\"" Nov 12 20:54:01.741223 systemd[1]: Started cri-containerd-1f9ce82b4964fe01e63168b1cad113dc2db7960c3b67157f5853dcbbb9421465.scope - libcontainer container 1f9ce82b4964fe01e63168b1cad113dc2db7960c3b67157f5853dcbbb9421465. Nov 12 20:54:01.742694 systemd[1]: Started cri-containerd-6881a9307540fc45737a50d5630d52be0a79f5bfd8341bf1386498dd2a48778b.scope - libcontainer container 6881a9307540fc45737a50d5630d52be0a79f5bfd8341bf1386498dd2a48778b. Nov 12 20:54:01.747711 systemd[1]: Started cri-containerd-3caa7e0cbeace5760686cd8c36ffee50485a7198c0fcbd55a4ff6ae41992f77c.scope - libcontainer container 3caa7e0cbeace5760686cd8c36ffee50485a7198c0fcbd55a4ff6ae41992f77c. 
Nov 12 20:54:01.807012 containerd[1456]: time="2024-11-12T20:54:01.806723540Z" level=info msg="StartContainer for \"6881a9307540fc45737a50d5630d52be0a79f5bfd8341bf1386498dd2a48778b\" returns successfully" Nov 12 20:54:01.807012 containerd[1456]: time="2024-11-12T20:54:01.806896019Z" level=info msg="StartContainer for \"1f9ce82b4964fe01e63168b1cad113dc2db7960c3b67157f5853dcbbb9421465\" returns successfully" Nov 12 20:54:01.826726 containerd[1456]: time="2024-11-12T20:54:01.826640917Z" level=info msg="StartContainer for \"3caa7e0cbeace5760686cd8c36ffee50485a7198c0fcbd55a4ff6ae41992f77c\" returns successfully" Nov 12 20:54:02.641506 kubelet[2149]: E1112 20:54:02.641414 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:02.648456 kubelet[2149]: E1112 20:54:02.648405 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:02.651011 kubelet[2149]: E1112 20:54:02.650984 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:02.862200 kubelet[2149]: I1112 20:54:02.862156 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:54:03.303053 kubelet[2149]: E1112 20:54:03.302990 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:54:03.392694 kubelet[2149]: I1112 20:54:03.392624 2149 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:54:03.547171 kubelet[2149]: I1112 20:54:03.547115 2149 apiserver.go:52] "Watching apiserver" Nov 12 20:54:03.574011 kubelet[2149]: I1112 20:54:03.573873 2149 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:54:03.656435 kubelet[2149]: E1112 20:54:03.656399 2149 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:03.657032 kubelet[2149]: E1112 20:54:03.656400 2149 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:03.657105 kubelet[2149]: E1112 20:54:03.656448 2149 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 12 20:54:03.657208 kubelet[2149]: E1112 20:54:03.657190 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:03.657208 kubelet[2149]: E1112 20:54:03.657195 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:03.657312 kubelet[2149]: E1112 20:54:03.657191 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:04.658449 kubelet[2149]: E1112 20:54:04.658377 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:05.379466 systemd[1]: Reloading requested from client PID 2440 ('systemctl') (unit session-7.scope)... Nov 12 20:54:05.379489 systemd[1]: Reloading... Nov 12 20:54:05.481982 zram_generator::config[2480]: No configuration found. Nov 12 20:54:05.600895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:05.655378 kubelet[2149]: E1112 20:54:05.655256 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:05.713507 systemd[1]: Reloading finished in 333 ms. Nov 12 20:54:05.758449 kubelet[2149]: I1112 20:54:05.758405 2149 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:05.758459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:05.779263 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:54:05.779539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:05.779588 systemd[1]: kubelet.service: Consumed 1.064s CPU time, 123.3M memory peak, 0B memory swap peak. Nov 12 20:54:05.788308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:05.937873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:05.943155 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:54:05.989972 kubelet[2524]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:05.989972 kubelet[2524]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:54:05.989972 kubelet[2524]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:05.989972 kubelet[2524]: I1112 20:54:05.989826 2524 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:54:05.998781 kubelet[2524]: I1112 20:54:05.997138 2524 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:54:05.998781 kubelet[2524]: I1112 20:54:05.997169 2524 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:54:05.998781 kubelet[2524]: I1112 20:54:05.997462 2524 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:54:05.998781 kubelet[2524]: I1112 20:54:05.998777 2524 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
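With client rotation on, the restarted kubelet loads its current client credentials from the single rotated PEM bundle shown above. Since kubelet-client-current.pem carries both the certificate and the private key, the one path can serve as both arguments to the standard-library loader; a small sketch (not kubelet code) that reads it the same way:

```go
// Load the rotated client cert/key bundle that the log reports
// the certificate store using.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	// The bundle contains both CERTIFICATE and PRIVATE KEY blocks,
	// so the same file works for both parameters.
	cert, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("loaded client cert with %d certificate block(s)\n", len(cert.Certificate))
}
```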
Nov 12 20:54:06.000993 kubelet[2524]: I1112 20:54:06.000958 2524 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:06.004023 kubelet[2524]: E1112 20:54:06.003997 2524 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:54:06.004023 kubelet[2524]: I1112 20:54:06.004021 2524 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:54:06.010962 kubelet[2524]: I1112 20:54:06.010929 2524 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:54:06.011231 kubelet[2524]: I1112 20:54:06.011207 2524 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:54:06.011436 kubelet[2524]: I1112 20:54:06.011400 2524 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:54:06.011631 kubelet[2524]: I1112 20:54:06.011430 2524 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:54:06.011631 kubelet[2524]: I1112 20:54:06.011629 2524 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:54:06.011753 kubelet[2524]: I1112 20:54:06.011638 2524 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:54:06.011753 kubelet[2524]: I1112 20:54:06.011682 2524 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:06.011813 kubelet[2524]: I1112 20:54:06.011798 2524 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:54:06.011890 kubelet[2524]: I1112 20:54:06.011849 2524 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:54:06.011890 kubelet[2524]: I1112 20:54:06.011890 
2524 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:54:06.012013 kubelet[2524]: I1112 20:54:06.011921 2524 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:54:06.013230 kubelet[2524]: I1112 20:54:06.012999 2524 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:54:06.016948 kubelet[2524]: I1112 20:54:06.014207 2524 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:54:06.016948 kubelet[2524]: I1112 20:54:06.015020 2524 server.go:1269] "Started kubelet" Nov 12 20:54:06.016948 kubelet[2524]: I1112 20:54:06.015515 2524 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:54:06.016948 kubelet[2524]: I1112 20:54:06.016800 2524 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:54:06.017680 kubelet[2524]: I1112 20:54:06.017658 2524 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:54:06.018121 kubelet[2524]: I1112 20:54:06.018077 2524 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:54:06.018370 kubelet[2524]: I1112 20:54:06.018355 2524 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:54:06.059276 kubelet[2524]: I1112 20:54:06.059241 2524 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:54:06.059633 kubelet[2524]: I1112 20:54:06.059617 2524 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:54:06.061008 kubelet[2524]: E1112 20:54:06.060863 2524 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:06.061726 kubelet[2524]: I1112 20:54:06.061700 2524 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:54:06.062876 kubelet[2524]: I1112 20:54:06.062858 2524 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:54:06.063108 kubelet[2524]: I1112 20:54:06.062864 2524 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:54:06.063402 kubelet[2524]: I1112 20:54:06.063345 2524 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:54:06.067279 kubelet[2524]: I1112 20:54:06.066714 2524 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:54:06.072252 kubelet[2524]: I1112 20:54:06.072094 2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:54:06.074029 kubelet[2524]: I1112 20:54:06.073993 2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:54:06.074029 kubelet[2524]: I1112 20:54:06.074029 2524 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:54:06.074114 kubelet[2524]: I1112 20:54:06.074049 2524 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:54:06.074114 kubelet[2524]: E1112 20:54:06.074093 2524 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:54:06.078026 kubelet[2524]: E1112 20:54:06.078001 2524 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:54:06.127542 kubelet[2524]: I1112 20:54:06.127500 2524 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:54:06.127542 kubelet[2524]: I1112 20:54:06.127522 2524 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:54:06.127542 kubelet[2524]: I1112 20:54:06.127545 2524 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:06.128218 kubelet[2524]: I1112 20:54:06.128189 2524 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:54:06.128252 kubelet[2524]: I1112 20:54:06.128206 2524 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:54:06.128252 kubelet[2524]: I1112 20:54:06.128228 2524 policy_none.go:49] "None policy: Start" Nov 12 20:54:06.128821 kubelet[2524]: I1112 20:54:06.128800 2524 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:54:06.128821 kubelet[2524]: I1112 20:54:06.128822 2524 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:54:06.128992 kubelet[2524]: I1112 20:54:06.128977 2524 state_mem.go:75] "Updated machine memory state" Nov 12 20:54:06.134500 kubelet[2524]: I1112 20:54:06.134343 2524 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:54:06.134646 kubelet[2524]: I1112 20:54:06.134622 2524 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:54:06.134713 kubelet[2524]: I1112 20:54:06.134671 2524 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:54:06.135329 kubelet[2524]: I1112 20:54:06.135311 2524 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:54:06.195242 kubelet[2524]: E1112 20:54:06.195107 2524 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:06.240941 kubelet[2524]: I1112 20:54:06.240892 2524 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:54:06.249098 kubelet[2524]: I1112 20:54:06.249052 2524 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Nov 12 20:54:06.249255 kubelet[2524]: I1112 20:54:06.249141 2524 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:54:06.262226 kubelet[2524]: I1112 20:54:06.262181 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:06.262226 kubelet[2524]: I1112 20:54:06.262220 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:06.262406 kubelet[2524]: I1112 20:54:06.262246 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:06.262406 kubelet[2524]: I1112 20:54:06.262272 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:06.262406 kubelet[2524]: I1112 20:54:06.262316 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:06.262406 kubelet[2524]: I1112 20:54:06.262363 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:54:06.262406 kubelet[2524]: I1112 20:54:06.262400 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:06.262551 kubelet[2524]: I1112 20:54:06.262437 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9670d5142020e87ec5bb7dbda8890348-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9670d5142020e87ec5bb7dbda8890348\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:06.262551 kubelet[2524]: I1112 20:54:06.262470 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:06.494779 kubelet[2524]: E1112 20:54:06.494508 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:06.494779 kubelet[2524]: E1112 20:54:06.494675 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:06.495461 kubelet[2524]: E1112 20:54:06.495440 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:07.013531 kubelet[2524]: I1112 20:54:07.013461 2524 apiserver.go:52] "Watching apiserver" Nov 12 20:54:07.060040 kubelet[2524]: I1112 20:54:07.059978 2524 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:54:07.092687 kubelet[2524]: E1112 20:54:07.092466 2524 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:07.092687 kubelet[2524]: E1112 20:54:07.092578 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:07.092878 kubelet[2524]: E1112 20:54:07.092710 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:07.343142 kubelet[2524]: I1112 20:54:07.342979 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.342954411 podStartE2EDuration="1.342954411s" podCreationTimestamp="2024-11-12 20:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:07.342927252 +0000 UTC m=+1.394675505" watchObservedRunningTime="2024-11-12 20:54:07.342954411 +0000 UTC m=+1.394702654" Nov 12 20:54:07.759690 kubelet[2524]: I1112 20:54:07.759589 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.759569295 podStartE2EDuration="1.759569295s" podCreationTimestamp="2024-11-12 20:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:07.759355812 +0000 UTC m=+1.811104065" watchObservedRunningTime="2024-11-12 20:54:07.759569295 +0000 UTC m=+1.811317538" Nov 12 20:54:07.915924 kubelet[2524]: I1112 20:54:07.915759 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.915734847 podStartE2EDuration="3.915734847s" podCreationTimestamp="2024-11-12 20:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:07.822543541 +0000 UTC m=+1.874291795" watchObservedRunningTime="2024-11-12 20:54:07.915734847 +0000 UTC m=+1.967483090" Nov 12 20:54:08.101523 kubelet[2524]: E1112 20:54:08.101355 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:10.008750 kubelet[2524]: E1112 20:54:10.008704 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:12.192176 sudo[1639]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:12.194473 sshd[1636]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:12.198254 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:34636.service: Deactivated successfully. Nov 12 20:54:12.200153 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:54:12.200326 systemd[1]: session-7.scope: Consumed 5.608s CPU time, 158.8M memory peak, 0B memory swap peak. Nov 12 20:54:12.200787 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:54:12.201769 systemd-logind[1438]: Removed session 7. 
Nov 12 20:54:12.625264 kubelet[2524]: I1112 20:54:12.625104 2524 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:54:12.625883 kubelet[2524]: I1112 20:54:12.625653 2524 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:54:12.625979 containerd[1456]: time="2024-11-12T20:54:12.625463122Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:54:13.295423 kubelet[2524]: E1112 20:54:13.295357 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:13.488435 systemd[1]: Created slice kubepods-besteffort-pod43e8399e_3308_42ad_a156_088dd8b0a858.slice - libcontainer container kubepods-besteffort-pod43e8399e_3308_42ad_a156_088dd8b0a858.slice. Nov 12 20:54:13.601751 kubelet[2524]: I1112 20:54:13.601566 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4v6s\" (UniqueName: \"kubernetes.io/projected/43e8399e-3308-42ad-a156-088dd8b0a858-kube-api-access-x4v6s\") pod \"kube-proxy-z8v8w\" (UID: \"43e8399e-3308-42ad-a156-088dd8b0a858\") " pod="kube-system/kube-proxy-z8v8w" Nov 12 20:54:13.601751 kubelet[2524]: I1112 20:54:13.601620 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43e8399e-3308-42ad-a156-088dd8b0a858-kube-proxy\") pod \"kube-proxy-z8v8w\" (UID: \"43e8399e-3308-42ad-a156-088dd8b0a858\") " pod="kube-system/kube-proxy-z8v8w" Nov 12 20:54:13.601751 kubelet[2524]: I1112 20:54:13.601645 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e8399e-3308-42ad-a156-088dd8b0a858-xtables-lock\") pod \"kube-proxy-z8v8w\" (UID: \"43e8399e-3308-42ad-a156-088dd8b0a858\") " pod="kube-system/kube-proxy-z8v8w" Nov 12 20:54:13.601751 kubelet[2524]: I1112 20:54:13.601662 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e8399e-3308-42ad-a156-088dd8b0a858-lib-modules\") pod \"kube-proxy-z8v8w\" (UID: \"43e8399e-3308-42ad-a156-088dd8b0a858\") " pod="kube-system/kube-proxy-z8v8w" Nov 12 20:54:13.759632 systemd[1]: Created slice kubepods-besteffort-pod42b3ca6d_e1b3_47ea_bb50_58c26dcc5adb.slice - libcontainer container kubepods-besteffort-pod42b3ca6d_e1b3_47ea_bb50_58c26dcc5adb.slice. 
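The slice names above embed each pod's UID with its dashes mapped to underscores, since systemd slice unit names use "-" as the hierarchy separator. A sketch of that mapping for the kube-proxy pod, whose UID appears in dashed form in the volume lines above:

```go
// Derive the cgroup slice name the log shows from the pod UID.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Pod UID as logged in the VerifyControllerAttachedVolume lines.
	uid := "43e8399e-3308-42ad-a156-088dd8b0a858"
	// Dashes are escaped to underscores so they are not read as
	// slice-hierarchy separators by systemd.
	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
	fmt.Println(slice) // kubepods-besteffort-pod43e8399e_3308_42ad_a156_088dd8b0a858.slice
}
```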
Nov 12 20:54:13.799955 kubelet[2524]: E1112 20:54:13.799873 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:13.800628 containerd[1456]: time="2024-11-12T20:54:13.800567556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8v8w,Uid:43e8399e-3308-42ad-a156-088dd8b0a858,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:13.903033 kubelet[2524]: I1112 20:54:13.902877 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-zlqdj\" (UID: \"42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb\") " pod="tigera-operator/tigera-operator-f8bc97d4c-zlqdj" Nov 12 20:54:13.903033 kubelet[2524]: I1112 20:54:13.902952 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnnp\" (UniqueName: \"kubernetes.io/projected/42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb-kube-api-access-xsnnp\") pod \"tigera-operator-f8bc97d4c-zlqdj\" (UID: \"42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb\") " pod="tigera-operator/tigera-operator-f8bc97d4c-zlqdj" Nov 12 20:54:14.062673 containerd[1456]: time="2024-11-12T20:54:14.062618129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-zlqdj,Uid:42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:54:14.083377 containerd[1456]: time="2024-11-12T20:54:14.083226808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:14.083377 containerd[1456]: time="2024-11-12T20:54:14.083292503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:14.083377 containerd[1456]: time="2024-11-12T20:54:14.083305280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:14.083609 containerd[1456]: time="2024-11-12T20:54:14.083403422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:14.110183 kubelet[2524]: E1112 20:54:14.110138 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:14.111154 systemd[1]: Started cri-containerd-552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a.scope - libcontainer container 552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a. 
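The reconciler_common.go records above identify each volume with a UniqueName of the form <plugin>/<podUID>-<volumeName>. A sketch reproducing that composition for the kube-proxy pod's projected token volume (buildUniqueName is a hypothetical helper for illustration, not a kubelet API):

```go
package main

import "fmt"

// buildUniqueName is a hypothetical helper mirroring the pattern of the
// UniqueName strings in the log: <plugin>/<podUID>-<volumeName>.
func buildUniqueName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(buildUniqueName(
		"kubernetes.io/projected",
		"43e8399e-3308-42ad-a156-088dd8b0a858",
		"kube-api-access-x4v6s",
	))
	// kubernetes.io/projected/43e8399e-3308-42ad-a156-088dd8b0a858-kube-api-access-x4v6s
}
```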
Nov 12 20:54:14.136776 containerd[1456]: time="2024-11-12T20:54:14.136719935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8v8w,Uid:43e8399e-3308-42ad-a156-088dd8b0a858,Namespace:kube-system,Attempt:0,} returns sandbox id \"552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a\"" Nov 12 20:54:14.137640 kubelet[2524]: E1112 20:54:14.137615 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:14.139865 containerd[1456]: time="2024-11-12T20:54:14.139827933Z" level=info msg="CreateContainer within sandbox \"552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:54:14.545430 kubelet[2524]: E1112 20:54:14.545387 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:14.717412 systemd[1]: run-containerd-runc-k8s.io-552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a-runc.vML7ur.mount: Deactivated successfully. Nov 12 20:54:15.113115 kubelet[2524]: E1112 20:54:15.113055 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:15.639499 containerd[1456]: time="2024-11-12T20:54:15.639432903Z" level=info msg="CreateContainer within sandbox \"552d5df187a37d648e3c2a40f92046e635020a5d97017b9c653af7c6cf21965a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748\"" Nov 12 20:54:15.640183 containerd[1456]: time="2024-11-12T20:54:15.640161048Z" level=info msg="StartContainer for \"d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748\"" Nov 12 20:54:15.646346 containerd[1456]: time="2024-11-12T20:54:15.644423551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:15.646346 containerd[1456]: time="2024-11-12T20:54:15.644503064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:15.646346 containerd[1456]: time="2024-11-12T20:54:15.644517414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:15.646346 containerd[1456]: time="2024-11-12T20:54:15.644609352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:15.664127 systemd[1]: Started cri-containerd-6f1326fd58b24cacb4f82302c4dc5361e24b3d87e2b4a4cf6ab8a2a3853b7963.scope - libcontainer container 6f1326fd58b24cacb4f82302c4dc5361e24b3d87e2b4a4cf6ab8a2a3853b7963. Nov 12 20:54:15.667465 systemd[1]: Started cri-containerd-d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748.scope - libcontainer container d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748. 
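Every sandbox and container id in these records (552d5df1…, d38e1651…) is a 64-character lowercase hex string, which is handy to know when grepping this log. A quick sanity check; the pattern is an observation about the ids seen here, not a documented containerd guarantee:

```go
package main

import (
	"fmt"
	"regexp"
)

// containerd ids in this log are 64 lowercase hex characters.
var idRe = regexp.MustCompile(`^[0-9a-f]{64}$`)

func main() {
	id := "d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748"
	fmt.Println(idRe.MatchString(id)) // true
}
```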
Nov 12 20:54:15.733277 containerd[1456]: time="2024-11-12T20:54:15.733229928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-zlqdj,Uid:42b3ca6d-e1b3-47ea-bb50-58c26dcc5adb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f1326fd58b24cacb4f82302c4dc5361e24b3d87e2b4a4cf6ab8a2a3853b7963\"" Nov 12 20:54:15.733578 containerd[1456]: time="2024-11-12T20:54:15.733253826Z" level=info msg="StartContainer for \"d38e1651b77623ded5ee070530daa119ab77218c20a7e61ec20de153f05b9748\" returns successfully" Nov 12 20:54:15.737189 containerd[1456]: time="2024-11-12T20:54:15.735982529Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:54:16.116091 kubelet[2524]: E1112 20:54:16.116055 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:16.117591 kubelet[2524]: E1112 20:54:16.117541 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.118774 kubelet[2524]: E1112 20:54:17.118725 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.906320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318682598.mount: Deactivated successfully. Nov 12 20:54:18.479813 containerd[1456]: time="2024-11-12T20:54:18.479744655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:18.490815 containerd[1456]: time="2024-11-12T20:54:18.490763178Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763339" Nov 12 20:54:18.523472 containerd[1456]: time="2024-11-12T20:54:18.523427106Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:18.548574 containerd[1456]: time="2024-11-12T20:54:18.548531873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:18.549458 containerd[1456]: time="2024-11-12T20:54:18.549422177Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.813402011s" Nov 12 20:54:18.549501 containerd[1456]: time="2024-11-12T20:54:18.549458021Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:54:18.551282 containerd[1456]: time="2024-11-12T20:54:18.551251195Z" level=info msg="CreateContainer within sandbox \"6f1326fd58b24cacb4f82302c4dc5361e24b3d87e2b4a4cf6ab8a2a3853b7963\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:54:18.637885 containerd[1456]: time="2024-11-12T20:54:18.637802821Z" level=info msg="CreateContainer within sandbox 
\"6f1326fd58b24cacb4f82302c4dc5361e24b3d87e2b4a4cf6ab8a2a3853b7963\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e177640a61d45b3298d0f84fc5c05ce9fb061a70cab47ec72bd8e428e3c3ba72\"" Nov 12 20:54:18.638419 containerd[1456]: time="2024-11-12T20:54:18.638395328Z" level=info msg="StartContainer for \"e177640a61d45b3298d0f84fc5c05ce9fb061a70cab47ec72bd8e428e3c3ba72\"" Nov 12 20:54:18.669177 systemd[1]: Started cri-containerd-e177640a61d45b3298d0f84fc5c05ce9fb061a70cab47ec72bd8e428e3c3ba72.scope - libcontainer container e177640a61d45b3298d0f84fc5c05ce9fb061a70cab47ec72bd8e428e3c3ba72. Nov 12 20:54:18.696091 containerd[1456]: time="2024-11-12T20:54:18.696042788Z" level=info msg="StartContainer for \"e177640a61d45b3298d0f84fc5c05ce9fb061a70cab47ec72bd8e428e3c3ba72\" returns successfully" Nov 12 20:54:19.132369 kubelet[2524]: I1112 20:54:19.132290 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z8v8w" podStartSLOduration=6.132265829 podStartE2EDuration="6.132265829s" podCreationTimestamp="2024-11-12 20:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:16.1611489 +0000 UTC m=+10.212897143" watchObservedRunningTime="2024-11-12 20:54:19.132265829 +0000 UTC m=+13.184014073" Nov 12 20:54:19.133051 kubelet[2524]: I1112 20:54:19.132426 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-zlqdj" podStartSLOduration=3.317100836 podStartE2EDuration="6.132419151s" podCreationTimestamp="2024-11-12 20:54:13 +0000 UTC" firstStartedPulling="2024-11-12 20:54:15.734806103 +0000 UTC m=+9.786554346" lastFinishedPulling="2024-11-12 20:54:18.550124418 +0000 UTC m=+12.601872661" observedRunningTime="2024-11-12 20:54:19.132116205 +0000 UTC m=+13.183864448" watchObservedRunningTime="2024-11-12 20:54:19.132419151 +0000 UTC m=+13.184167394" Nov 12 20:54:20.013502 kubelet[2524]: E1112 20:54:20.013456 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.653524 kubelet[2524]: I1112 20:54:21.653474 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-xtables-lock\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.654115 systemd[1]: Created slice kubepods-besteffort-podbf566e7a_6fae_4806_9e13_325f5b6a2cd6.slice - libcontainer container kubepods-besteffort-podbf566e7a_6fae_4806_9e13_325f5b6a2cd6.slice. 
Nov 12 20:54:21.658201 kubelet[2524]: I1112 20:54:21.658167 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-var-run-calico\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.659274 kubelet[2524]: I1112 20:54:21.659242 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf566e7a-6fae-4806-9e13-325f5b6a2cd6-typha-certs\") pod \"calico-typha-6747554b69-wlfvm\" (UID: \"bf566e7a-6fae-4806-9e13-325f5b6a2cd6\") " pod="calico-system/calico-typha-6747554b69-wlfvm" Nov 12 20:54:21.659387 kubelet[2524]: I1112 20:54:21.659370 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4lpb\" (UniqueName: \"kubernetes.io/projected/ad0651ff-d987-4426-8ba1-32820a663382-kube-api-access-f4lpb\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.659482 kubelet[2524]: I1112 20:54:21.659463 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-cni-net-dir\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.659603 kubelet[2524]: I1112 20:54:21.659582 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad0651ff-d987-4426-8ba1-32820a663382-node-certs\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.660701 kubelet[2524]: I1112 20:54:21.660676 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-lib-modules\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.660840 kubelet[2524]: I1112 20:54:21.660820 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-var-lib-calico\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.660963 kubelet[2524]: I1112 20:54:21.660943 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-cni-bin-dir\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.661086 kubelet[2524]: I1112 20:54:21.661054 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fb2t\" (UniqueName: \"kubernetes.io/projected/bf566e7a-6fae-4806-9e13-325f5b6a2cd6-kube-api-access-5fb2t\") pod \"calico-typha-6747554b69-wlfvm\" (UID: \"bf566e7a-6fae-4806-9e13-325f5b6a2cd6\") " pod="calico-system/calico-typha-6747554b69-wlfvm" Nov 12 20:54:21.661225 
kubelet[2524]: I1112 20:54:21.661207 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-policysync\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.661423 kubelet[2524]: I1112 20:54:21.661399 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf566e7a-6fae-4806-9e13-325f5b6a2cd6-tigera-ca-bundle\") pod \"calico-typha-6747554b69-wlfvm\" (UID: \"bf566e7a-6fae-4806-9e13-325f5b6a2cd6\") " pod="calico-system/calico-typha-6747554b69-wlfvm" Nov 12 20:54:21.661515 kubelet[2524]: I1112 20:54:21.661499 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad0651ff-d987-4426-8ba1-32820a663382-tigera-ca-bundle\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.661654 kubelet[2524]: I1112 20:54:21.661598 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-cni-log-dir\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.661654 kubelet[2524]: I1112 20:54:21.661633 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad0651ff-d987-4426-8ba1-32820a663382-flexvol-driver-host\") pod \"calico-node-h2prm\" (UID: \"ad0651ff-d987-4426-8ba1-32820a663382\") " pod="calico-system/calico-node-h2prm" Nov 12 20:54:21.669114 systemd[1]: Created slice kubepods-besteffort-podad0651ff_d987_4426_8ba1_32820a663382.slice - libcontainer container kubepods-besteffort-podad0651ff_d987_4426_8ba1_32820a663382.slice. 
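The kubepods slice names created above and earlier are derived from the pod UID with dashes replaced by underscores, because systemd reserves '-' to express slice hierarchy. A sketch of that escaping (besteffortSlice is a hypothetical helper, not a kubelet function):

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSlice shows how the slice names in the log are formed:
// systemd uses '-' for slice hierarchy, so the pod UID's dashes are
// escaped to underscores before appending ".slice".
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSlice("ad0651ff-d987-4426-8ba1-32820a663382"))
	// kubepods-besteffort-podad0651ff_d987_4426_8ba1_32820a663382.slice
}
```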
Nov 12 20:54:21.735365 kubelet[2524]: E1112 20:54:21.735282 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:21.761953 kubelet[2524]: I1112 20:54:21.761866 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bb7ebc4-d76d-43f4-9467-1cf6406d5a57-registration-dir\") pod \"csi-node-driver-gzdb2\" (UID: \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\") " pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:21.762136 kubelet[2524]: I1112 20:54:21.761995 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bb7ebc4-d76d-43f4-9467-1cf6406d5a57-varrun\") pod \"csi-node-driver-gzdb2\" (UID: \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\") " pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:21.762136 kubelet[2524]: I1112 20:54:21.762063 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bb7ebc4-d76d-43f4-9467-1cf6406d5a57-socket-dir\") pod \"csi-node-driver-gzdb2\" (UID: \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\") " pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:21.762136 kubelet[2524]: I1112 20:54:21.762083 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb7ebc4-d76d-43f4-9467-1cf6406d5a57-kubelet-dir\") pod \"csi-node-driver-gzdb2\" (UID: \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\") " pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:21.762264 kubelet[2524]: I1112 20:54:21.762191 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6zh\" (UniqueName: \"kubernetes.io/projected/6bb7ebc4-d76d-43f4-9467-1cf6406d5a57-kube-api-access-pd6zh\") pod \"csi-node-driver-gzdb2\" (UID: \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\") " pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:21.775941 kubelet[2524]: E1112 20:54:21.769533 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.775941 kubelet[2524]: W1112 20:54:21.769565 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.775941 kubelet[2524]: E1112 20:54:21.769597 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.775941 kubelet[2524]: E1112 20:54:21.774407 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.775941 kubelet[2524]: W1112 20:54:21.774431 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.775941 kubelet[2524]: E1112 20:54:21.774457 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.780201 kubelet[2524]: E1112 20:54:21.777794 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.780201 kubelet[2524]: W1112 20:54:21.777816 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.780201 kubelet[2524]: E1112 20:54:21.777839 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.797929 kubelet[2524]: E1112 20:54:21.793951 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.797929 kubelet[2524]: W1112 20:54:21.793984 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.797929 kubelet[2524]: E1112 20:54:21.794011 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.811180 kubelet[2524]: E1112 20:54:21.811142 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.811424 kubelet[2524]: W1112 20:54:21.811345 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.811424 kubelet[2524]: E1112 20:54:21.811379 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.863271 kubelet[2524]: E1112 20:54:21.863228 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.863271 kubelet[2524]: W1112 20:54:21.863260 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.863444 kubelet[2524]: E1112 20:54:21.863297 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.863682 kubelet[2524]: E1112 20:54:21.863650 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.863682 kubelet[2524]: W1112 20:54:21.863666 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.863682 kubelet[2524]: E1112 20:54:21.863683 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.864027 kubelet[2524]: E1112 20:54:21.863999 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.864027 kubelet[2524]: W1112 20:54:21.864016 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.864102 kubelet[2524]: E1112 20:54:21.864034 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.864342 kubelet[2524]: E1112 20:54:21.864315 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.864342 kubelet[2524]: W1112 20:54:21.864335 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.864428 kubelet[2524]: E1112 20:54:21.864352 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.864705 kubelet[2524]: E1112 20:54:21.864684 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.864705 kubelet[2524]: W1112 20:54:21.864701 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.864938 kubelet[2524]: E1112 20:54:21.864868 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.865076 kubelet[2524]: E1112 20:54:21.865047 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.865076 kubelet[2524]: W1112 20:54:21.865068 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.865192 kubelet[2524]: E1112 20:54:21.865159 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.865357 kubelet[2524]: E1112 20:54:21.865337 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.865357 kubelet[2524]: W1112 20:54:21.865352 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.865452 kubelet[2524]: E1112 20:54:21.865388 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.865593 kubelet[2524]: E1112 20:54:21.865577 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.865681 kubelet[2524]: W1112 20:54:21.865593 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.865722 kubelet[2524]: E1112 20:54:21.865698 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.865826 kubelet[2524]: E1112 20:54:21.865811 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.865853 kubelet[2524]: W1112 20:54:21.865824 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.866023 kubelet[2524]: E1112 20:54:21.865892 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.866057 kubelet[2524]: E1112 20:54:21.866043 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.866057 kubelet[2524]: W1112 20:54:21.866054 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.866113 kubelet[2524]: E1112 20:54:21.866069 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.866400 kubelet[2524]: E1112 20:54:21.866369 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.866439 kubelet[2524]: W1112 20:54:21.866398 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.866506 kubelet[2524]: E1112 20:54:21.866481 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.866717 kubelet[2524]: E1112 20:54:21.866700 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.866717 kubelet[2524]: W1112 20:54:21.866714 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.866851 kubelet[2524]: E1112 20:54:21.866826 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.867002 kubelet[2524]: E1112 20:54:21.866955 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.867002 kubelet[2524]: W1112 20:54:21.866972 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.867230 kubelet[2524]: E1112 20:54:21.867065 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.867266 kubelet[2524]: E1112 20:54:21.867258 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.867322 kubelet[2524]: W1112 20:54:21.867269 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.867392 kubelet[2524]: E1112 20:54:21.867372 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.867530 kubelet[2524]: E1112 20:54:21.867515 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.867594 kubelet[2524]: W1112 20:54:21.867528 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.867594 kubelet[2524]: E1112 20:54:21.867559 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.867756 kubelet[2524]: E1112 20:54:21.867743 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.867796 kubelet[2524]: W1112 20:54:21.867755 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.867822 kubelet[2524]: E1112 20:54:21.867810 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.868093 kubelet[2524]: E1112 20:54:21.868079 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.868124 kubelet[2524]: W1112 20:54:21.868093 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.868124 kubelet[2524]: E1112 20:54:21.868109 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.868360 kubelet[2524]: E1112 20:54:21.868345 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.868360 kubelet[2524]: W1112 20:54:21.868359 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.868424 kubelet[2524]: E1112 20:54:21.868377 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.868686 kubelet[2524]: E1112 20:54:21.868671 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.868719 kubelet[2524]: W1112 20:54:21.868686 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.868719 kubelet[2524]: E1112 20:54:21.868704 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.869042 kubelet[2524]: E1112 20:54:21.869027 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.869082 kubelet[2524]: W1112 20:54:21.869042 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.869159 kubelet[2524]: E1112 20:54:21.869119 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.869314 kubelet[2524]: E1112 20:54:21.869300 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.869342 kubelet[2524]: W1112 20:54:21.869313 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.869411 kubelet[2524]: E1112 20:54:21.869392 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.869700 kubelet[2524]: E1112 20:54:21.869687 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.869749 kubelet[2524]: W1112 20:54:21.869702 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.869778 kubelet[2524]: E1112 20:54:21.869737 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.869998 kubelet[2524]: E1112 20:54:21.869983 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.870034 kubelet[2524]: W1112 20:54:21.869999 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.870034 kubelet[2524]: E1112 20:54:21.870026 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.870233 kubelet[2524]: E1112 20:54:21.870219 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.870285 kubelet[2524]: W1112 20:54:21.870232 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.870320 kubelet[2524]: E1112 20:54:21.870306 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.870482 kubelet[2524]: E1112 20:54:21.870468 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.870512 kubelet[2524]: W1112 20:54:21.870482 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.870512 kubelet[2524]: E1112 20:54:21.870494 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:21.882541 kubelet[2524]: E1112 20:54:21.882236 2524 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:21.882541 kubelet[2524]: W1112 20:54:21.882259 2524 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:21.882541 kubelet[2524]: E1112 20:54:21.882282 2524 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:21.964497 kubelet[2524]: E1112 20:54:21.964459 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.965152 containerd[1456]: time="2024-11-12T20:54:21.964931677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6747554b69-wlfvm,Uid:bf566e7a-6fae-4806-9e13-325f5b6a2cd6,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:21.975224 kubelet[2524]: E1112 20:54:21.975178 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.976297 containerd[1456]: time="2024-11-12T20:54:21.976047273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h2prm,Uid:ad0651ff-d987-4426-8ba1-32820a663382,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:22.429005 containerd[1456]: time="2024-11-12T20:54:22.427932988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:22.429005 containerd[1456]: time="2024-11-12T20:54:22.428953189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:22.429005 containerd[1456]: time="2024-11-12T20:54:22.428975514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:22.429639 containerd[1456]: time="2024-11-12T20:54:22.429578412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:22.437169 containerd[1456]: time="2024-11-12T20:54:22.437047946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:22.437333 containerd[1456]: time="2024-11-12T20:54:22.437129240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:22.437333 containerd[1456]: time="2024-11-12T20:54:22.437163719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:22.437448 containerd[1456]: time="2024-11-12T20:54:22.437340818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:22.454615 systemd[1]: Started cri-containerd-392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73.scope - libcontainer container 392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73. Nov 12 20:54:22.468250 systemd[1]: Started cri-containerd-41228514a3497a46be35fbe8933316b33069522882c30085f3a34f45bdb8495b.scope - libcontainer container 41228514a3497a46be35fbe8933316b33069522882c30085f3a34f45bdb8495b. 
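The repeated FlexVolume failures above all reduce to one cause: the plugin probe execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary does not exist ("executable file not found in $PATH"), the captured output is therefore empty, and unmarshalling an empty byte slice yields exactly the logged error. A two-line reproduction of that error string:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Unmarshalling empty driver output reproduces the logged error.
	var out map[string]any
	err := json.Unmarshal([]byte(""), &out)
	fmt.Println(err) // unexpected end of JSON input
}
```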
Nov 12 20:54:22.498371 containerd[1456]: time="2024-11-12T20:54:22.498292950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h2prm,Uid:ad0651ff-d987-4426-8ba1-32820a663382,Namespace:calico-system,Attempt:0,} returns sandbox id \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\"" Nov 12 20:54:22.499354 kubelet[2524]: E1112 20:54:22.499321 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:22.501492 containerd[1456]: time="2024-11-12T20:54:22.501441082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:54:22.526391 containerd[1456]: time="2024-11-12T20:54:22.526249611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6747554b69-wlfvm,Uid:bf566e7a-6fae-4806-9e13-325f5b6a2cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"41228514a3497a46be35fbe8933316b33069522882c30085f3a34f45bdb8495b\"" Nov 12 20:54:22.527474 kubelet[2524]: E1112 20:54:22.527432 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:23.074948 kubelet[2524]: E1112 20:54:23.074868 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:23.947441 containerd[1456]: time="2024-11-12T20:54:23.947365305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:23.948323 containerd[1456]: time="2024-11-12T20:54:23.948242905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:54:23.949340 containerd[1456]: time="2024-11-12T20:54:23.949299065Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:23.951652 containerd[1456]: time="2024-11-12T20:54:23.951563772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:23.952318 containerd[1456]: time="2024-11-12T20:54:23.952274736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.450778572s" Nov 12 20:54:23.952318 containerd[1456]: time="2024-11-12T20:54:23.952308945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:54:23.953545 containerd[1456]: time="2024-11-12T20:54:23.953319493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 
20:54:23.954873 containerd[1456]: time="2024-11-12T20:54:23.954837985Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:54:23.976043 containerd[1456]: time="2024-11-12T20:54:23.975990444Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c\"" Nov 12 20:54:23.976694 containerd[1456]: time="2024-11-12T20:54:23.976637850Z" level=info msg="StartContainer for \"eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c\"" Nov 12 20:54:24.020345 systemd[1]: Started cri-containerd-eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c.scope - libcontainer container eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c. Nov 12 20:54:24.075224 systemd[1]: cri-containerd-eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c.scope: Deactivated successfully. Nov 12 20:54:24.108077 containerd[1456]: time="2024-11-12T20:54:24.108002536Z" level=info msg="StartContainer for \"eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c\" returns successfully" Nov 12 20:54:24.131381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c-rootfs.mount: Deactivated successfully. Nov 12 20:54:24.139972 kubelet[2524]: E1112 20:54:24.139273 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:24.140768 containerd[1456]: time="2024-11-12T20:54:24.136935575Z" level=info msg="shim disconnected" id=eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c namespace=k8s.io Nov 12 20:54:24.140938 containerd[1456]: time="2024-11-12T20:54:24.140890086Z" level=warning msg="cleaning up after shim disconnected" id=eda122bad7106cb10bdf1243090e5d4834b5900054a8bbd6f95470ecad77dd7c namespace=k8s.io Nov 12 20:54:24.141029 containerd[1456]: time="2024-11-12T20:54:24.141012552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:25.075175 kubelet[2524]: E1112 20:54:25.075113 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:25.142577 kubelet[2524]: E1112 20:54:25.142524 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:26.352635 containerd[1456]: time="2024-11-12T20:54:26.352575948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:26.353429 containerd[1456]: time="2024-11-12T20:54:26.353388669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:54:26.355216 containerd[1456]: time="2024-11-12T20:54:26.355177977Z" level=info msg="ImageCreate event 
name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:26.358040 containerd[1456]: time="2024-11-12T20:54:26.357995569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:26.358910 containerd[1456]: time="2024-11-12T20:54:26.358846324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.405489667s" Nov 12 20:54:26.358951 containerd[1456]: time="2024-11-12T20:54:26.358917938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:54:26.360147 containerd[1456]: time="2024-11-12T20:54:26.360049027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:54:26.370719 containerd[1456]: time="2024-11-12T20:54:26.370665229Z" level=info msg="CreateContainer within sandbox \"41228514a3497a46be35fbe8933316b33069522882c30085f3a34f45bdb8495b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:54:26.389932 containerd[1456]: time="2024-11-12T20:54:26.389855447Z" level=info msg="CreateContainer within sandbox \"41228514a3497a46be35fbe8933316b33069522882c30085f3a34f45bdb8495b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"488f15ba8664e00683a793b07c8a4fa7fe8ef86a3a3dfe10e2d0444806ea760e\"" Nov 12 20:54:26.391609 containerd[1456]: time="2024-11-12T20:54:26.390461342Z" level=info msg="StartContainer for \"488f15ba8664e00683a793b07c8a4fa7fe8ef86a3a3dfe10e2d0444806ea760e\"" Nov 12 20:54:26.426035 systemd[1]: Started cri-containerd-488f15ba8664e00683a793b07c8a4fa7fe8ef86a3a3dfe10e2d0444806ea760e.scope - libcontainer container 488f15ba8664e00683a793b07c8a4fa7fe8ef86a3a3dfe10e2d0444806ea760e. 
Nov 12 20:54:26.474482 containerd[1456]: time="2024-11-12T20:54:26.474427948Z" level=info msg="StartContainer for \"488f15ba8664e00683a793b07c8a4fa7fe8ef86a3a3dfe10e2d0444806ea760e\" returns successfully" Nov 12 20:54:27.075251 kubelet[2524]: E1112 20:54:27.075178 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:27.147353 kubelet[2524]: E1112 20:54:27.147298 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:28.148826 kubelet[2524]: I1112 20:54:28.148774 2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:54:28.149354 kubelet[2524]: E1112 20:54:28.149208 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:29.076240 kubelet[2524]: E1112 20:54:29.076185 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:31.516926 kubelet[2524]: E1112 20:54:31.516815 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:31.579681 containerd[1456]: time="2024-11-12T20:54:31.579624485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.580595 containerd[1456]: time="2024-11-12T20:54:31.580523082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:54:31.581840 containerd[1456]: time="2024-11-12T20:54:31.581807728Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.584347 containerd[1456]: time="2024-11-12T20:54:31.584306719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.584971 containerd[1456]: time="2024-11-12T20:54:31.584938346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.224851864s" Nov 12 20:54:31.585022 containerd[1456]: time="2024-11-12T20:54:31.584975790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference 
\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:54:31.595304 containerd[1456]: time="2024-11-12T20:54:31.595249118Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:54:31.612213 containerd[1456]: time="2024-11-12T20:54:31.612159899Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed\"" Nov 12 20:54:31.614002 containerd[1456]: time="2024-11-12T20:54:31.612984459Z" level=info msg="StartContainer for \"74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed\"" Nov 12 20:54:31.654169 systemd[1]: Started cri-containerd-74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed.scope - libcontainer container 74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed. Nov 12 20:54:31.689606 containerd[1456]: time="2024-11-12T20:54:31.689559383Z" level=info msg="StartContainer for \"74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed\" returns successfully" Nov 12 20:54:32.518482 kubelet[2524]: E1112 20:54:32.518416 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:32.545385 kubelet[2524]: I1112 20:54:32.545287 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6747554b69-wlfvm" podStartSLOduration=7.713512141 podStartE2EDuration="11.545263102s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:22.528033766 +0000 UTC m=+16.579782009" lastFinishedPulling="2024-11-12 20:54:26.359784727 +0000 UTC m=+20.411532970" observedRunningTime="2024-11-12 20:54:27.2518565 +0000 UTC m=+21.303604743" watchObservedRunningTime="2024-11-12 20:54:32.545263102 +0000 UTC m=+26.597011345" Nov 12 20:54:33.075013 kubelet[2524]: E1112 20:54:33.074941 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:33.519963 kubelet[2524]: E1112 20:54:33.519886 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.629413 systemd[1]: cri-containerd-74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed.scope: Deactivated successfully. Nov 12 20:54:33.652220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed-rootfs.mount: Deactivated successfully. Nov 12 20:54:33.705919 kubelet[2524]: I1112 20:54:33.705639 2524 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:54:34.018393 systemd[1]: Created slice kubepods-burstable-pod97b0959e_83eb_40de_b1d7_86e881d338a7.slice - libcontainer container kubepods-burstable-pod97b0959e_83eb_40de_b1d7_86e881d338a7.slice. 
Nov 12 20:54:34.044720 systemd[1]: Created slice kubepods-besteffort-pod4a476cf6_4cc5_49bc_ac05_b79f0197d4f4.slice - libcontainer container kubepods-besteffort-pod4a476cf6_4cc5_49bc_ac05_b79f0197d4f4.slice. Nov 12 20:54:34.050259 systemd[1]: Created slice kubepods-besteffort-pod73906cf1_8520_41f1_9a4b_beeed90ae509.slice - libcontainer container kubepods-besteffort-pod73906cf1_8520_41f1_9a4b_beeed90ae509.slice. Nov 12 20:54:34.056723 systemd[1]: Created slice kubepods-burstable-pod76abd0d6_f821_42df_bb0b_16d0b8a05a4b.slice - libcontainer container kubepods-burstable-pod76abd0d6_f821_42df_bb0b_16d0b8a05a4b.slice. Nov 12 20:54:34.062214 systemd[1]: Created slice kubepods-besteffort-pode60f6ba4_080e_47c8_9607_4ea565272f92.slice - libcontainer container kubepods-besteffort-pode60f6ba4_080e_47c8_9607_4ea565272f92.slice. Nov 12 20:54:34.098620 containerd[1456]: time="2024-11-12T20:54:34.098533054Z" level=info msg="shim disconnected" id=74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed namespace=k8s.io Nov 12 20:54:34.098620 containerd[1456]: time="2024-11-12T20:54:34.098607141Z" level=warning msg="cleaning up after shim disconnected" id=74322f7a7602bbd22a7292ba496ef2655fceea9a19ec2124ee6f80dd4e37beed namespace=k8s.io Nov 12 20:54:34.098620 containerd[1456]: time="2024-11-12T20:54:34.098616900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:34.122148 kubelet[2524]: I1112 20:54:34.121935 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt8m\" (UniqueName: \"kubernetes.io/projected/97b0959e-83eb-40de-b1d7-86e881d338a7-kube-api-access-mzt8m\") pod \"coredns-6f6b679f8f-brc2t\" (UID: \"97b0959e-83eb-40de-b1d7-86e881d338a7\") " pod="kube-system/coredns-6f6b679f8f-brc2t" Nov 12 20:54:34.122148 kubelet[2524]: I1112 20:54:34.122010 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b0959e-83eb-40de-b1d7-86e881d338a7-config-volume\") pod \"coredns-6f6b679f8f-brc2t\" (UID: \"97b0959e-83eb-40de-b1d7-86e881d338a7\") " pod="kube-system/coredns-6f6b679f8f-brc2t" Nov 12 20:54:34.223359 kubelet[2524]: I1112 20:54:34.223269 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a476cf6-4cc5-49bc-ac05-b79f0197d4f4-tigera-ca-bundle\") pod \"calico-kube-controllers-6cc5bdbb85-fd6v2\" (UID: \"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4\") " pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" Nov 12 20:54:34.223359 kubelet[2524]: I1112 20:54:34.223371 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8p8f\" (UniqueName: \"kubernetes.io/projected/4a476cf6-4cc5-49bc-ac05-b79f0197d4f4-kube-api-access-n8p8f\") pod \"calico-kube-controllers-6cc5bdbb85-fd6v2\" (UID: \"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4\") " pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" Nov 12 20:54:34.223622 kubelet[2524]: I1112 20:54:34.223404 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfgzt\" (UniqueName: \"kubernetes.io/projected/76abd0d6-f821-42df-bb0b-16d0b8a05a4b-kube-api-access-lfgzt\") pod \"coredns-6f6b679f8f-6f9q6\" (UID: \"76abd0d6-f821-42df-bb0b-16d0b8a05a4b\") " pod="kube-system/coredns-6f6b679f8f-6f9q6" Nov 12 20:54:34.223622 kubelet[2524]: I1112 20:54:34.223434 2524 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qm8\" (UniqueName: \"kubernetes.io/projected/e60f6ba4-080e-47c8-9607-4ea565272f92-kube-api-access-k6qm8\") pod \"calico-apiserver-595fc8fb58-zn62j\" (UID: \"e60f6ba4-080e-47c8-9607-4ea565272f92\") " pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" Nov 12 20:54:34.223622 kubelet[2524]: I1112 20:54:34.223497 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e60f6ba4-080e-47c8-9607-4ea565272f92-calico-apiserver-certs\") pod \"calico-apiserver-595fc8fb58-zn62j\" (UID: \"e60f6ba4-080e-47c8-9607-4ea565272f92\") " pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" Nov 12 20:54:34.223622 kubelet[2524]: I1112 20:54:34.223557 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76abd0d6-f821-42df-bb0b-16d0b8a05a4b-config-volume\") pod \"coredns-6f6b679f8f-6f9q6\" (UID: \"76abd0d6-f821-42df-bb0b-16d0b8a05a4b\") " pod="kube-system/coredns-6f6b679f8f-6f9q6" Nov 12 20:54:34.223622 kubelet[2524]: I1112 20:54:34.223587 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73906cf1-8520-41f1-9a4b-beeed90ae509-calico-apiserver-certs\") pod \"calico-apiserver-595fc8fb58-pgc2v\" (UID: \"73906cf1-8520-41f1-9a4b-beeed90ae509\") " pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" Nov 12 20:54:34.223788 kubelet[2524]: I1112 20:54:34.223620 2524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tt7k\" (UniqueName: \"kubernetes.io/projected/73906cf1-8520-41f1-9a4b-beeed90ae509-kube-api-access-6tt7k\") pod \"calico-apiserver-595fc8fb58-pgc2v\" (UID: \"73906cf1-8520-41f1-9a4b-beeed90ae509\") " pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" Nov 12 20:54:34.322100 kubelet[2524]: E1112 20:54:34.321889 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:34.323311 containerd[1456]: time="2024-11-12T20:54:34.323246452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-brc2t,Uid:97b0959e-83eb-40de-b1d7-86e881d338a7,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:34.348293 containerd[1456]: time="2024-11-12T20:54:34.348218787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc5bdbb85-fd6v2,Uid:4a476cf6-4cc5-49bc-ac05-b79f0197d4f4,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:34.354399 containerd[1456]: time="2024-11-12T20:54:34.354340021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-pgc2v,Uid:73906cf1-8520-41f1-9a4b-beeed90ae509,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:54:34.359807 kubelet[2524]: E1112 20:54:34.359761 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:34.360890 containerd[1456]: time="2024-11-12T20:54:34.360847881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6f9q6,Uid:76abd0d6-f821-42df-bb0b-16d0b8a05a4b,Namespace:kube-system,Attempt:0,}" Nov 12 
20:54:34.365472 containerd[1456]: time="2024-11-12T20:54:34.364762906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-zn62j,Uid:e60f6ba4-080e-47c8-9607-4ea565272f92,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:54:34.457821 containerd[1456]: time="2024-11-12T20:54:34.457741543Z" level=error msg="Failed to destroy network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.458476 containerd[1456]: time="2024-11-12T20:54:34.458367102Z" level=error msg="encountered an error cleaning up failed sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.458476 containerd[1456]: time="2024-11-12T20:54:34.458424105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-brc2t,Uid:97b0959e-83eb-40de-b1d7-86e881d338a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.459212 kubelet[2524]: E1112 20:54:34.459141 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.459287 kubelet[2524]: E1112 20:54:34.459253 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-brc2t" Nov 12 20:54:34.459287 kubelet[2524]: E1112 20:54:34.459275 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-brc2t" Nov 12 20:54:34.459357 kubelet[2524]: E1112 20:54:34.459327 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-brc2t_kube-system(97b0959e-83eb-40de-b1d7-86e881d338a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-brc2t_kube-system(97b0959e-83eb-40de-b1d7-86e881d338a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-brc2t" podUID="97b0959e-83eb-40de-b1d7-86e881d338a7" Nov 12 20:54:34.489293 containerd[1456]: time="2024-11-12T20:54:34.489042081Z" level=error msg="Failed to destroy network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.490561 containerd[1456]: time="2024-11-12T20:54:34.490525389Z" level=error msg="encountered an error cleaning up failed sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.490701 containerd[1456]: time="2024-11-12T20:54:34.490677350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc5bdbb85-fd6v2,Uid:4a476cf6-4cc5-49bc-ac05-b79f0197d4f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.491203 kubelet[2524]: E1112 20:54:34.491152 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.491270 kubelet[2524]: E1112 20:54:34.491237 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" Nov 12 20:54:34.491270 kubelet[2524]: E1112 20:54:34.491263 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" Nov 12 20:54:34.491334 kubelet[2524]: E1112 20:54:34.491308 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cc5bdbb85-fd6v2_calico-system(4a476cf6-4cc5-49bc-ac05-b79f0197d4f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6cc5bdbb85-fd6v2_calico-system(4a476cf6-4cc5-49bc-ac05-b79f0197d4f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" podUID="4a476cf6-4cc5-49bc-ac05-b79f0197d4f4" Nov 12 20:54:34.494254 containerd[1456]: time="2024-11-12T20:54:34.494187944Z" level=error msg="Failed to destroy network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.494726 containerd[1456]: time="2024-11-12T20:54:34.494684246Z" level=error msg="encountered an error cleaning up failed sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.494774 containerd[1456]: time="2024-11-12T20:54:34.494755658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-pgc2v,Uid:73906cf1-8520-41f1-9a4b-beeed90ae509,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.495112 kubelet[2524]: E1112 20:54:34.495064 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.495165 kubelet[2524]: E1112 20:54:34.495139 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" Nov 12 20:54:34.495195 kubelet[2524]: E1112 20:54:34.495168 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" Nov 12 20:54:34.495384 kubelet[2524]: E1112 20:54:34.495227 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-595fc8fb58-pgc2v_calico-apiserver(73906cf1-8520-41f1-9a4b-beeed90ae509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595fc8fb58-pgc2v_calico-apiserver(73906cf1-8520-41f1-9a4b-beeed90ae509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" podUID="73906cf1-8520-41f1-9a4b-beeed90ae509" Nov 12 20:54:34.503993 containerd[1456]: time="2024-11-12T20:54:34.503935782Z" level=error msg="Failed to destroy network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.504946 containerd[1456]: time="2024-11-12T20:54:34.504881805Z" level=error msg="encountered an error cleaning up failed sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.504946 containerd[1456]: time="2024-11-12T20:54:34.504956072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-zn62j,Uid:e60f6ba4-080e-47c8-9607-4ea565272f92,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.505310 kubelet[2524]: E1112 20:54:34.505245 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.505362 kubelet[2524]: E1112 20:54:34.505341 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" Nov 12 20:54:34.505389 kubelet[2524]: E1112 20:54:34.505371 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" Nov 12 20:54:34.505456 
kubelet[2524]: E1112 20:54:34.505424 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595fc8fb58-zn62j_calico-apiserver(e60f6ba4-080e-47c8-9607-4ea565272f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595fc8fb58-zn62j_calico-apiserver(e60f6ba4-080e-47c8-9607-4ea565272f92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" podUID="e60f6ba4-080e-47c8-9607-4ea565272f92" Nov 12 20:54:34.514182 containerd[1456]: time="2024-11-12T20:54:34.514104213Z" level=error msg="Failed to destroy network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.514654 containerd[1456]: time="2024-11-12T20:54:34.514606247Z" level=error msg="encountered an error cleaning up failed sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.514716 containerd[1456]: time="2024-11-12T20:54:34.514675534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6f9q6,Uid:76abd0d6-f821-42df-bb0b-16d0b8a05a4b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.515097 kubelet[2524]: E1112 20:54:34.515015 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.515172 kubelet[2524]: E1112 20:54:34.515098 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6f9q6" Nov 12 20:54:34.515172 kubelet[2524]: E1112 20:54:34.515125 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6f9q6" Nov 12 20:54:34.515261 kubelet[2524]: E1112 20:54:34.515178 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6f9q6_kube-system(76abd0d6-f821-42df-bb0b-16d0b8a05a4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6f9q6_kube-system(76abd0d6-f821-42df-bb0b-16d0b8a05a4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6f9q6" podUID="76abd0d6-f821-42df-bb0b-16d0b8a05a4b" Nov 12 20:54:34.523950 kubelet[2524]: E1112 20:54:34.523890 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:34.526029 containerd[1456]: time="2024-11-12T20:54:34.525972270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:54:34.526504 kubelet[2524]: I1112 20:54:34.526467 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:34.528046 containerd[1456]: time="2024-11-12T20:54:34.527530747Z" level=info msg="StopPodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" Nov 12 20:54:34.528046 containerd[1456]: time="2024-11-12T20:54:34.527740212Z" level=info msg="Ensure that sandbox b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699 in task-service has been cleanup successfully" Nov 12 20:54:34.529290 kubelet[2524]: I1112 20:54:34.528827 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:34.530044 containerd[1456]: time="2024-11-12T20:54:34.529564766Z" level=info msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" Nov 12 20:54:34.530044 containerd[1456]: time="2024-11-12T20:54:34.529753820Z" level=info msg="Ensure that sandbox c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4 in task-service has been cleanup successfully" Nov 12 20:54:34.531003 kubelet[2524]: I1112 20:54:34.530983 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:54:34.531637 containerd[1456]: time="2024-11-12T20:54:34.531593635Z" level=info msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\"" Nov 12 20:54:34.531800 containerd[1456]: time="2024-11-12T20:54:34.531771006Z" level=info msg="Ensure that sandbox 3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8 in task-service has been cleanup successfully" Nov 12 20:54:34.535044 kubelet[2524]: I1112 20:54:34.535018 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:34.536439 containerd[1456]: time="2024-11-12T20:54:34.536318694Z" level=info msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" Nov 12 20:54:34.536591 
containerd[1456]: time="2024-11-12T20:54:34.536560964Z" level=info msg="Ensure that sandbox 2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b in task-service has been cleanup successfully" Nov 12 20:54:34.540962 kubelet[2524]: I1112 20:54:34.540786 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:54:34.541966 containerd[1456]: time="2024-11-12T20:54:34.541588231Z" level=info msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\"" Nov 12 20:54:34.541966 containerd[1456]: time="2024-11-12T20:54:34.541815842Z" level=info msg="Ensure that sandbox a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4 in task-service has been cleanup successfully" Nov 12 20:54:34.583952 containerd[1456]: time="2024-11-12T20:54:34.583771826Z" level=error msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" failed" error="failed to destroy network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.584536 kubelet[2524]: E1112 20:54:34.584350 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:34.584536 kubelet[2524]: E1112 20:54:34.584417 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b"} Nov 12 20:54:34.584536 kubelet[2524]: E1112 20:54:34.584478 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:54:34.584536 kubelet[2524]: E1112 20:54:34.584503 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" podUID="4a476cf6-4cc5-49bc-ac05-b79f0197d4f4" Nov 12 20:54:34.596301 containerd[1456]: time="2024-11-12T20:54:34.596245643Z" level=error msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" failed" error="failed to destroy network for sandbox 
\"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.597366 kubelet[2524]: E1112 20:54:34.597299 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:34.597366 kubelet[2524]: E1112 20:54:34.597361 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4"} Nov 12 20:54:34.597461 kubelet[2524]: E1112 20:54:34.597388 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73906cf1-8520-41f1-9a4b-beeed90ae509\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:54:34.598920 kubelet[2524]: E1112 20:54:34.597692 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73906cf1-8520-41f1-9a4b-beeed90ae509\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" podUID="73906cf1-8520-41f1-9a4b-beeed90ae509" Nov 12 20:54:34.598920 kubelet[2524]: E1112 20:54:34.598756 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:34.598920 kubelet[2524]: E1112 20:54:34.598812 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699"} Nov 12 20:54:34.598920 kubelet[2524]: E1112 20:54:34.598853 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76abd0d6-f821-42df-bb0b-16d0b8a05a4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 
20:54:34.599108 containerd[1456]: time="2024-11-12T20:54:34.598469018Z" level=error msg="StopPodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" failed" error="failed to destroy network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.599136 kubelet[2524]: E1112 20:54:34.598887 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76abd0d6-f821-42df-bb0b-16d0b8a05a4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6f9q6" podUID="76abd0d6-f821-42df-bb0b-16d0b8a05a4b" Nov 12 20:54:34.600189 containerd[1456]: time="2024-11-12T20:54:34.600139146Z" level=error msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" failed" error="failed to destroy network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.600384 kubelet[2524]: E1112 20:54:34.600336 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:54:34.600384 kubelet[2524]: E1112 20:54:34.600367 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8"} Nov 12 20:54:34.600384 kubelet[2524]: E1112 20:54:34.600390 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e60f6ba4-080e-47c8-9607-4ea565272f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:54:34.600635 kubelet[2524]: E1112 20:54:34.600412 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e60f6ba4-080e-47c8-9607-4ea565272f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" 
podUID="e60f6ba4-080e-47c8-9607-4ea565272f92" Nov 12 20:54:34.608541 containerd[1456]: time="2024-11-12T20:54:34.608508222Z" level=error msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" failed" error="failed to destroy network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:34.608737 kubelet[2524]: E1112 20:54:34.608690 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:54:34.608737 kubelet[2524]: E1112 20:54:34.608737 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4"} Nov 12 20:54:34.608828 kubelet[2524]: E1112 20:54:34.608759 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97b0959e-83eb-40de-b1d7-86e881d338a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:54:34.608828 kubelet[2524]: E1112 20:54:34.608776 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97b0959e-83eb-40de-b1d7-86e881d338a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-brc2t" podUID="97b0959e-83eb-40de-b1d7-86e881d338a7" Nov 12 20:54:35.080213 systemd[1]: Created slice kubepods-besteffort-pod6bb7ebc4_d76d_43f4_9467_1cf6406d5a57.slice - libcontainer container kubepods-besteffort-pod6bb7ebc4_d76d_43f4_9467_1cf6406d5a57.slice. 
Nov 12 20:54:35.082760 containerd[1456]: time="2024-11-12T20:54:35.082725051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzdb2,Uid:6bb7ebc4-d76d-43f4-9467-1cf6406d5a57,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:35.142833 containerd[1456]: time="2024-11-12T20:54:35.142765565Z" level=error msg="Failed to destroy network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:35.143341 containerd[1456]: time="2024-11-12T20:54:35.143264221Z" level=error msg="encountered an error cleaning up failed sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:35.143379 containerd[1456]: time="2024-11-12T20:54:35.143333449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzdb2,Uid:6bb7ebc4-d76d-43f4-9467-1cf6406d5a57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:35.143686 kubelet[2524]: E1112 20:54:35.143631 2524 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:35.143765 kubelet[2524]: E1112 20:54:35.143710 2524 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:35.143765 kubelet[2524]: E1112 20:54:35.143739 2524 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzdb2" Nov 12 20:54:35.143833 kubelet[2524]: E1112 20:54:35.143800 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gzdb2_calico-system(6bb7ebc4-d76d-43f4-9467-1cf6406d5a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gzdb2_calico-system(6bb7ebc4-d76d-43f4-9467-1cf6406d5a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:35.145300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc-shm.mount: Deactivated successfully. Nov 12 20:54:35.543959 kubelet[2524]: I1112 20:54:35.543886 2524 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:54:35.544583 containerd[1456]: time="2024-11-12T20:54:35.544550426Z" level=info msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\"" Nov 12 20:54:35.544787 containerd[1456]: time="2024-11-12T20:54:35.544757336Z" level=info msg="Ensure that sandbox 8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc in task-service has been cleanup successfully" Nov 12 20:54:35.577687 containerd[1456]: time="2024-11-12T20:54:35.577611773Z" level=error msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" failed" error="failed to destroy network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:54:35.577988 kubelet[2524]: E1112 20:54:35.577937 2524 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:54:35.578058 kubelet[2524]: E1112 20:54:35.578004 2524 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc"} Nov 12 20:54:35.578058 kubelet[2524]: E1112 20:54:35.578049 2524 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:54:35.578149 kubelet[2524]: E1112 20:54:35.578074 2524 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-gzdb2" podUID="6bb7ebc4-d76d-43f4-9467-1cf6406d5a57" Nov 12 20:54:39.083646 kubelet[2524]: I1112 20:54:39.083604 2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:54:39.087385 kubelet[2524]: E1112 20:54:39.086935 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:39.551474 kubelet[2524]: E1112 20:54:39.551441 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:40.544724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418844212.mount: Deactivated successfully. Nov 12 20:54:43.035447 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:41602.service - OpenSSH per-connection server daemon (10.0.0.1:41602). Nov 12 20:54:43.454774 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 41602 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:43.457194 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:43.463025 systemd-logind[1438]: New session 8 of user core. Nov 12 20:54:43.467223 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:54:43.473790 containerd[1456]: time="2024-11-12T20:54:43.472315640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:54:43.482633 containerd[1456]: time="2024-11-12T20:54:43.482429476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 8.956190298s" Nov 12 20:54:43.482633 containerd[1456]: time="2024-11-12T20:54:43.482495917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:54:43.485500 containerd[1456]: time="2024-11-12T20:54:43.485425466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.486634 containerd[1456]: time="2024-11-12T20:54:43.486438091Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.487182 containerd[1456]: time="2024-11-12T20:54:43.487110328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:43.509169 containerd[1456]: time="2024-11-12T20:54:43.509096827Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:54:43.547298 containerd[1456]: time="2024-11-12T20:54:43.547214693Z" level=info msg="CreateContainer within sandbox \"392f484ad765d3d3135c4b5739094109ec052a5b86bebf31487def54a49cad73\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"232ed71ebae06c5a6f1c91f9c226e3ee0c0c44d55fd8dd08bf78764d1485e22a\"" Nov 12 20:54:43.548471 containerd[1456]: time="2024-11-12T20:54:43.548417811Z" level=info msg="StartContainer for \"232ed71ebae06c5a6f1c91f9c226e3ee0c0c44d55fd8dd08bf78764d1485e22a\"" Nov 12 20:54:43.632112 systemd[1]: Started cri-containerd-232ed71ebae06c5a6f1c91f9c226e3ee0c0c44d55fd8dd08bf78764d1485e22a.scope - libcontainer container 232ed71ebae06c5a6f1c91f9c226e3ee0c0c44d55fd8dd08bf78764d1485e22a. Nov 12 20:54:43.645427 sshd[3588]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:43.648663 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:41602.service: Deactivated successfully. Nov 12 20:54:43.651611 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:54:43.652713 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:54:43.654008 systemd-logind[1438]: Removed session 8. Nov 12 20:54:43.675887 containerd[1456]: time="2024-11-12T20:54:43.675827661Z" level=info msg="StartContainer for \"232ed71ebae06c5a6f1c91f9c226e3ee0c0c44d55fd8dd08bf78764d1485e22a\" returns successfully" Nov 12 20:54:43.843539 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:54:43.843715 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 20:54:44.566031 kubelet[2524]: E1112 20:54:44.565991 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:44.614620 kubelet[2524]: I1112 20:54:44.614092 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h2prm" podStartSLOduration=2.629836975 podStartE2EDuration="23.614073233s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:22.501136576 +0000 UTC m=+16.552884819" lastFinishedPulling="2024-11-12 20:54:43.485372834 +0000 UTC m=+37.537121077" observedRunningTime="2024-11-12 20:54:44.614003347 +0000 UTC m=+38.665751600" watchObservedRunningTime="2024-11-12 20:54:44.614073233 +0000 UTC m=+38.665821476" Nov 12 20:54:45.076091 containerd[1456]: time="2024-11-12T20:54:45.076018721Z" level=info msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.136 [INFO][3691] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.136 [INFO][3691] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" iface="eth0" netns="/var/run/netns/cni-dd7df945-5d77-2d30-bbb5-fb5e9ae71358" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.137 [INFO][3691] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" iface="eth0" netns="/var/run/netns/cni-dd7df945-5d77-2d30-bbb5-fb5e9ae71358" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.137 [INFO][3691] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" iface="eth0" netns="/var/run/netns/cni-dd7df945-5d77-2d30-bbb5-fb5e9ae71358" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.137 [INFO][3691] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.137 [INFO][3691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.194 [INFO][3699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.194 [INFO][3699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.194 [INFO][3699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.201 [WARNING][3699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.201 [INFO][3699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.203 [INFO][3699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:54:45.211007 containerd[1456]: 2024-11-12 20:54:45.207 [INFO][3691] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:54:45.211447 containerd[1456]: time="2024-11-12T20:54:45.211266210Z" level=info msg="TearDown network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" successfully" Nov 12 20:54:45.211447 containerd[1456]: time="2024-11-12T20:54:45.211304695Z" level=info msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" returns successfully" Nov 12 20:54:45.212922 containerd[1456]: time="2024-11-12T20:54:45.212414455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc5bdbb85-fd6v2,Uid:4a476cf6-4cc5-49bc-ac05-b79f0197d4f4,Namespace:calico-system,Attempt:1,}" Nov 12 20:54:45.214971 systemd[1]: run-netns-cni\x2ddd7df945\x2d5d77\x2d2d30\x2dbbb5\x2dfb5e9ae71358.mount: Deactivated successfully. 
Nov 12 20:54:45.437890 systemd-networkd[1395]: cali45e726cee75: Link UP
Nov 12 20:54:45.438176 systemd-networkd[1395]: cali45e726cee75: Gained carrier
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.290 [INFO][3751] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.312 [INFO][3751] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0 calico-kube-controllers-6cc5bdbb85- calico-system 4a476cf6-4cc5-49bc-ac05-b79f0197d4f4 800 0 2024-11-12 20:54:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cc5bdbb85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cc5bdbb85-fd6v2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45e726cee75 [] []}} ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.312 [INFO][3751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.375 [INFO][3809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" HandleID="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.384 [INFO][3809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" HandleID="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011e100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cc5bdbb85-fd6v2", "timestamp":"2024-11-12 20:54:45.375185873 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.385 [INFO][3809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.385 [INFO][3809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.385 [INFO][3809] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.391 [INFO][3809] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.398 [INFO][3809] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.403 [INFO][3809] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.409 [INFO][3809] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.412 [INFO][3809] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.412 [INFO][3809] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.413 [INFO][3809] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.416 [INFO][3809] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.423 [INFO][3809] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.423 [INFO][3809] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" host="localhost"
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.423 [INFO][3809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:45.464381 containerd[1456]: 2024-11-12 20:54:45.423 [INFO][3809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" HandleID="k8s-pod-network.31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.427 [INFO][3751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0", GenerateName:"calico-kube-controllers-6cc5bdbb85-", Namespace:"calico-system", SelfLink:"", UID:"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc5bdbb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cc5bdbb85-fd6v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45e726cee75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.427 [INFO][3751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.427 [INFO][3751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45e726cee75 ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.439 [INFO][3751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.440 [INFO][3751] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0", GenerateName:"calico-kube-controllers-6cc5bdbb85-", Namespace:"calico-system", SelfLink:"", UID:"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc5bdbb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6", Pod:"calico-kube-controllers-6cc5bdbb85-fd6v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45e726cee75", MAC:"d6:6c:53:e1:59:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:45.465578 containerd[1456]: 2024-11-12 20:54:45.455 [INFO][3751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6" Namespace="calico-system" Pod="calico-kube-controllers-6cc5bdbb85-fd6v2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0"
Nov 12 20:54:45.530940 kernel: bpftool[3869]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Nov 12 20:54:45.543365 containerd[1456]: time="2024-11-12T20:54:45.542436049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:45.543365 containerd[1456]: time="2024-11-12T20:54:45.543323936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:45.543365 containerd[1456]: time="2024-11-12T20:54:45.543342372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:45.543594 containerd[1456]: time="2024-11-12T20:54:45.543463569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:45.572086 systemd[1]: Started cri-containerd-31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6.scope - libcontainer container 31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6.
Nov 12 20:54:45.589866 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 20:54:45.619662 containerd[1456]: time="2024-11-12T20:54:45.619600695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc5bdbb85-fd6v2,Uid:4a476cf6-4cc5-49bc-ac05-b79f0197d4f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6\""
Nov 12 20:54:45.622163 containerd[1456]: time="2024-11-12T20:54:45.622127456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\""
Nov 12 20:54:45.790482 systemd-networkd[1395]: vxlan.calico: Link UP
Nov 12 20:54:45.790491 systemd-networkd[1395]: vxlan.calico: Gained carrier
Nov 12 20:54:46.080677 containerd[1456]: time="2024-11-12T20:54:46.079882327Z" level=info msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\""
Nov 12 20:54:46.082679 containerd[1456]: time="2024-11-12T20:54:46.082274750Z" level=info msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\""
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.153 [INFO][4006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.153 [INFO][4006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" iface="eth0" netns="/var/run/netns/cni-267cc677-78a9-717f-e477-616e532b52f2"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.154 [INFO][4006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" iface="eth0" netns="/var/run/netns/cni-267cc677-78a9-717f-e477-616e532b52f2"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.154 [INFO][4006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" iface="eth0" netns="/var/run/netns/cni-267cc677-78a9-717f-e477-616e532b52f2"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.154 [INFO][4006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.154 [INFO][4006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.184 [INFO][4026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.184 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.184 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.191 [WARNING][4026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.191 [INFO][4026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.194 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:46.200454 containerd[1456]: 2024-11-12 20:54:46.197 [INFO][4006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4"
Nov 12 20:54:46.203087 containerd[1456]: time="2024-11-12T20:54:46.203030694Z" level=info msg="TearDown network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" successfully"
Nov 12 20:54:46.203087 containerd[1456]: time="2024-11-12T20:54:46.203078097Z" level=info msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" returns successfully"
Nov 12 20:54:46.203611 kubelet[2524]: E1112 20:54:46.203564 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:46.206327 containerd[1456]: time="2024-11-12T20:54:46.206264191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-brc2t,Uid:97b0959e-83eb-40de-b1d7-86e881d338a7,Namespace:kube-system,Attempt:1,}"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.157 [INFO][4010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.157 [INFO][4010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" iface="eth0" netns="/var/run/netns/cni-eb491148-fee5-e5ab-ce6e-4fb0423b5648"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.157 [INFO][4010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" iface="eth0" netns="/var/run/netns/cni-eb491148-fee5-e5ab-ce6e-4fb0423b5648"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.158 [INFO][4010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" iface="eth0" netns="/var/run/netns/cni-eb491148-fee5-e5ab-ce6e-4fb0423b5648"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.158 [INFO][4010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.158 [INFO][4010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.186 [INFO][4031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.186 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.194 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.202 [WARNING][4031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.202 [INFO][4031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.203 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:46.212868 containerd[1456]: 2024-11-12 20:54:46.209 [INFO][4010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8"
Nov 12 20:54:46.213419 containerd[1456]: time="2024-11-12T20:54:46.213091447Z" level=info msg="TearDown network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" successfully"
Nov 12 20:54:46.213419 containerd[1456]: time="2024-11-12T20:54:46.213127398Z" level=info msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" returns successfully"
Nov 12 20:54:46.214158 containerd[1456]: time="2024-11-12T20:54:46.214111059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-zn62j,Uid:e60f6ba4-080e-47c8-9607-4ea565272f92,Namespace:calico-apiserver,Attempt:1,}"
Nov 12 20:54:46.214505 systemd[1]: run-netns-cni\x2d267cc677\x2d78a9\x2d717f\x2de477\x2d616e532b52f2.mount: Deactivated successfully.
Nov 12 20:54:46.217279 systemd[1]: run-netns-cni\x2deb491148\x2dfee5\x2de5ab\x2dce6e\x2d4fb0423b5648.mount: Deactivated successfully.
Nov 12 20:54:46.247630 kubelet[2524]: I1112 20:54:46.247590 2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:54:46.250392 kubelet[2524]: E1112 20:54:46.248483 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:46.492428 systemd-networkd[1395]: calia416421b2af: Link UP
Nov 12 20:54:46.493558 systemd-networkd[1395]: calia416421b2af: Gained carrier
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.299 [INFO][4048] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0 calico-apiserver-595fc8fb58- calico-apiserver e60f6ba4-080e-47c8-9607-4ea565272f92 810 0 2024-11-12 20:54:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:595fc8fb58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-595fc8fb58-zn62j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia416421b2af [] []}} ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.300 [INFO][4048] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.344 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" HandleID="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.358 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" HandleID="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005020b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-595fc8fb58-zn62j", "timestamp":"2024-11-12 20:54:46.344894682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.358 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.358 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.358 [INFO][4077] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.361 [INFO][4077] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.452 [INFO][4077] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.458 [INFO][4077] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.460 [INFO][4077] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.463 [INFO][4077] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.465 [INFO][4077] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.468 [INFO][4077] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.473 [INFO][4077] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.483 [INFO][4077] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.483 [INFO][4077] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" host="localhost"
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.483 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:46.510624 containerd[1456]: 2024-11-12 20:54:46.483 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" HandleID="k8s-pod-network.801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.488 [INFO][4048] cni-plugin/k8s.go 386: Populated endpoint ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"e60f6ba4-080e-47c8-9607-4ea565272f92", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-595fc8fb58-zn62j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia416421b2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.489 [INFO][4048] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.489 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia416421b2af ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.492 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.493 [INFO][4048] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"e60f6ba4-080e-47c8-9607-4ea565272f92", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663", Pod:"calico-apiserver-595fc8fb58-zn62j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia416421b2af", MAC:"5e:1f:24:73:39:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:46.511328 containerd[1456]: 2024-11-12 20:54:46.506 [INFO][4048] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-zn62j" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0"
Nov 12 20:54:46.544650 containerd[1456]: time="2024-11-12T20:54:46.544490887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:46.544650 containerd[1456]: time="2024-11-12T20:54:46.544572457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:46.544650 containerd[1456]: time="2024-11-12T20:54:46.544586986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:46.545546 containerd[1456]: time="2024-11-12T20:54:46.545057755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:46.572142 systemd[1]: Started cri-containerd-801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663.scope - libcontainer container 801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663.
Nov 12 20:54:46.593043 systemd-networkd[1395]: calid5f427d7e84: Link UP
Nov 12 20:54:46.593617 systemd-networkd[1395]: calid5f427d7e84: Gained carrier
Nov 12 20:54:46.594245 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 20:54:46.624805 containerd[1456]: time="2024-11-12T20:54:46.624741986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-zn62j,Uid:e60f6ba4-080e-47c8-9607-4ea565272f92,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663\""
Nov 12 20:54:46.652130 systemd-networkd[1395]: cali45e726cee75: Gained IPv6LL
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.296 [INFO][4042] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--brc2t-eth0 coredns-6f6b679f8f- kube-system 97b0959e-83eb-40de-b1d7-86e881d338a7 809 0 2024-11-12 20:54:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-brc2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid5f427d7e84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.296 [INFO][4042] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.337 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" HandleID="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.361 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" HandleID="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e0e20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-brc2t", "timestamp":"2024-11-12 20:54:46.337026992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.361 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.483 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.485 [INFO][4071] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.488 [INFO][4071] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.553 [INFO][4071] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.563 [INFO][4071] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.566 [INFO][4071] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.570 [INFO][4071] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.570 [INFO][4071] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.573 [INFO][4071] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.577 [INFO][4071] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.584 [INFO][4071] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.584 [INFO][4071] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" host="localhost"
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.584 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:46.676233 containerd[1456]: 2024-11-12 20:54:46.584 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" HandleID="k8s-pod-network.9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.589 [INFO][4042] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--brc2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"97b0959e-83eb-40de-b1d7-86e881d338a7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-brc2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5f427d7e84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.589 [INFO][4042] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.589 [INFO][4042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5f427d7e84 ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.592 [INFO][4042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.593 [INFO][4042] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--brc2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"97b0959e-83eb-40de-b1d7-86e881d338a7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb", Pod:"coredns-6f6b679f8f-brc2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5f427d7e84", MAC:"6a:99:ae:33:1f:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:54:46.676919 containerd[1456]: 2024-11-12 20:54:46.673 [INFO][4042] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb" Namespace="kube-system" Pod="coredns-6f6b679f8f-brc2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0"
Nov 12 20:54:46.845235 containerd[1456]: time="2024-11-12T20:54:46.844882985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:46.845235 containerd[1456]: time="2024-11-12T20:54:46.845057347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:46.845235 containerd[1456]: time="2024-11-12T20:54:46.845073929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:46.845445 containerd[1456]: time="2024-11-12T20:54:46.845187962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:46.867219 systemd[1]: Started cri-containerd-9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb.scope - libcontainer container 9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb.
Nov 12 20:54:46.882251 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 20:54:46.909053 containerd[1456]: time="2024-11-12T20:54:46.909007541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-brc2t,Uid:97b0959e-83eb-40de-b1d7-86e881d338a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb\""
Nov 12 20:54:46.909766 kubelet[2524]: E1112 20:54:46.909740 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:46.925057 containerd[1456]: time="2024-11-12T20:54:46.925010343Z" level=info msg="CreateContainer within sandbox \"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 20:54:46.960929 containerd[1456]: time="2024-11-12T20:54:46.960857208Z" level=info msg="CreateContainer within sandbox \"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a9189280ac0c647c59e9e2b77545c6fa2418a9bf859b3e0076b43daabc6bd05\""
Nov 12 20:54:46.961649 containerd[1456]: time="2024-11-12T20:54:46.961591933Z" level=info msg="StartContainer for \"8a9189280ac0c647c59e9e2b77545c6fa2418a9bf859b3e0076b43daabc6bd05\""
Nov 12 20:54:46.991046 systemd[1]: Started cri-containerd-8a9189280ac0c647c59e9e2b77545c6fa2418a9bf859b3e0076b43daabc6bd05.scope - libcontainer container 8a9189280ac0c647c59e9e2b77545c6fa2418a9bf859b3e0076b43daabc6bd05.
Nov 12 20:54:47.028089 containerd[1456]: time="2024-11-12T20:54:47.028036662Z" level=info msg="StartContainer for \"8a9189280ac0c647c59e9e2b77545c6fa2418a9bf859b3e0076b43daabc6bd05\" returns successfully"
Nov 12 20:54:47.075501 containerd[1456]: time="2024-11-12T20:54:47.075441149Z" level=info msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\""
Nov 12 20:54:47.100647 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.122 [INFO][4295] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.122 [INFO][4295] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" iface="eth0" netns="/var/run/netns/cni-6b007bcd-157c-4a31-798f-ff7ae9ab07e9"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.123 [INFO][4295] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" iface="eth0" netns="/var/run/netns/cni-6b007bcd-157c-4a31-798f-ff7ae9ab07e9"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.123 [INFO][4295] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" iface="eth0" netns="/var/run/netns/cni-6b007bcd-157c-4a31-798f-ff7ae9ab07e9"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.123 [INFO][4295] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.123 [INFO][4295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.145 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.146 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.146 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.152 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.152 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0"
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.154 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:47.159222 containerd[1456]: 2024-11-12 20:54:47.156 [INFO][4295] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc"
Nov 12 20:54:47.160018 containerd[1456]: time="2024-11-12T20:54:47.159447107Z" level=info msg="TearDown network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" successfully"
Nov 12 20:54:47.160018 containerd[1456]: time="2024-11-12T20:54:47.159484971Z" level=info msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" returns successfully"
Nov 12 20:54:47.160397 containerd[1456]: time="2024-11-12T20:54:47.160364638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzdb2,Uid:6bb7ebc4-d76d-43f4-9467-1cf6406d5a57,Namespace:calico-system,Attempt:1,}"
Nov 12 20:54:47.219605 systemd[1]: run-netns-cni\x2d6b007bcd\x2d157c\x2d4a31\x2d798f\x2dff7ae9ab07e9.mount: Deactivated successfully.
Nov 12 20:54:47.344776 systemd-networkd[1395]: calib59f19c76d7: Link UP
Nov 12 20:54:47.345053 systemd-networkd[1395]: calib59f19c76d7: Gained carrier
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.214 [INFO][4312] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gzdb2-eth0 csi-node-driver- calico-system 6bb7ebc4-d76d-43f4-9467-1cf6406d5a57 834 0 2024-11-12 20:54:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gzdb2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib59f19c76d7 [] []}} ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.215 [INFO][4312] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.275 [INFO][4328] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" HandleID="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.289 [INFO][4328] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" HandleID="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gzdb2", "timestamp":"2024-11-12 20:54:47.27581249 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.289 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.289 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.289 [INFO][4328] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.291 [INFO][4328] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.303 [INFO][4328] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.320 [INFO][4328] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.322 [INFO][4328] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.324 [INFO][4328] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.324 [INFO][4328] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.325 [INFO][4328] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.331 [INFO][4328] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.337 [INFO][4328] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.338 [INFO][4328] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" host="localhost"
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.338 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:54:47.397030 containerd[1456]: 2024-11-12 20:54:47.338 [INFO][4328] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" HandleID="k8s-pod-network.3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.341 [INFO][4312] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzdb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gzdb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib59f19c76d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.341 [INFO][4312] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.341 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib59f19c76d7 ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.344 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.345 [INFO][4312] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzdb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5", Pod:"csi-node-driver-gzdb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib59f19c76d7", MAC:"5e:f3:aa:df:75:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:47.397731 containerd[1456]: 2024-11-12 20:54:47.392 [INFO][4312] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5" Namespace="calico-system" Pod="csi-node-driver-gzdb2" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:54:47.547882 containerd[1456]: time="2024-11-12T20:54:47.545841890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:47.547882 containerd[1456]: time="2024-11-12T20:54:47.545943317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:47.547882 containerd[1456]: time="2024-11-12T20:54:47.545970501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:47.547882 containerd[1456]: time="2024-11-12T20:54:47.546080145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:47.580065 systemd[1]: Started cri-containerd-3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5.scope - libcontainer container 3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5. 
Nov 12 20:54:47.583685 kubelet[2524]: E1112 20:54:47.583651 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:47.600512 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:54:47.612566 containerd[1456]: time="2024-11-12T20:54:47.612524641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzdb2,Uid:6bb7ebc4-d76d-43f4-9467-1cf6406d5a57,Namespace:calico-system,Attempt:1,} returns sandbox id \"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5\"" Nov 12 20:54:47.936419 kubelet[2524]: I1112 20:54:47.936321 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-brc2t" podStartSLOduration=34.93629391 podStartE2EDuration="34.93629391s" podCreationTimestamp="2024-11-12 20:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:47.684047406 +0000 UTC m=+41.735795639" watchObservedRunningTime="2024-11-12 20:54:47.93629391 +0000 UTC m=+41.988042154" Nov 12 20:54:48.000840 containerd[1456]: time="2024-11-12T20:54:48.000764282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.001724 containerd[1456]: time="2024-11-12T20:54:48.001679187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:54:48.003189 containerd[1456]: time="2024-11-12T20:54:48.003155506Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.006830 containerd[1456]: time="2024-11-12T20:54:48.006804180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:48.007682 containerd[1456]: time="2024-11-12T20:54:48.007638918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.385472886s" Nov 12 20:54:48.007725 containerd[1456]: time="2024-11-12T20:54:48.007684586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:54:48.008672 containerd[1456]: time="2024-11-12T20:54:48.008649578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:54:48.019195 containerd[1456]: time="2024-11-12T20:54:48.019154201Z" level=info msg="CreateContainer within sandbox \"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:54:48.076744 containerd[1456]: time="2024-11-12T20:54:48.075860500Z" level=info msg="StopPodSandbox for 
\"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" Nov 12 20:54:48.076744 containerd[1456]: time="2024-11-12T20:54:48.076030041Z" level=info msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" Nov 12 20:54:48.252149 systemd-networkd[1395]: calid5f427d7e84: Gained IPv6LL Nov 12 20:54:48.508139 systemd-networkd[1395]: calia416421b2af: Gained IPv6LL Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.342 [INFO][4439] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.343 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" iface="eth0" netns="/var/run/netns/cni-a85b0397-4ba4-9316-0fbc-1b1efdcd44fd" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.343 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" iface="eth0" netns="/var/run/netns/cni-a85b0397-4ba4-9316-0fbc-1b1efdcd44fd" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.343 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" iface="eth0" netns="/var/run/netns/cni-a85b0397-4ba4-9316-0fbc-1b1efdcd44fd" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.343 [INFO][4439] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.343 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.443 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.443 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.443 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.525 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.525 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.527 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:54:48.533042 containerd[1456]: 2024-11-12 20:54:48.529 [INFO][4439] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:54:48.534222 containerd[1456]: time="2024-11-12T20:54:48.534031357Z" level=info msg="TearDown network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" successfully" Nov 12 20:54:48.534222 containerd[1456]: time="2024-11-12T20:54:48.534065444Z" level=info msg="StopPodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" returns successfully" Nov 12 20:54:48.535571 kubelet[2524]: E1112 20:54:48.535349 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:48.535996 containerd[1456]: time="2024-11-12T20:54:48.535768895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6f9q6,Uid:76abd0d6-f821-42df-bb0b-16d0b8a05a4b,Namespace:kube-system,Attempt:1,}" Nov 12 20:54:48.536375 systemd[1]: run-netns-cni\x2da85b0397\x2d4ba4\x2d9316\x2d0fbc\x2d1b1efdcd44fd.mount: Deactivated successfully. Nov 12 20:54:48.539067 containerd[1456]: time="2024-11-12T20:54:48.539007379Z" level=info msg="CreateContainer within sandbox \"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f8055826421c2b62f129b63d97897c4b37317a772e7fb051e1fc0df6f7df104d\"" Nov 12 20:54:48.539622 containerd[1456]: time="2024-11-12T20:54:48.539578774Z" level=info msg="StartContainer for \"f8055826421c2b62f129b63d97897c4b37317a772e7fb051e1fc0df6f7df104d\"" Nov 12 20:54:48.579130 systemd[1]: Started cri-containerd-f8055826421c2b62f129b63d97897c4b37317a772e7fb051e1fc0df6f7df104d.scope - libcontainer container f8055826421c2b62f129b63d97897c4b37317a772e7fb051e1fc0df6f7df104d. Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.524 [INFO][4433] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.524 [INFO][4433] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" iface="eth0" netns="/var/run/netns/cni-397d9341-517f-c1b0-e022-9fbb811d974e" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.525 [INFO][4433] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" iface="eth0" netns="/var/run/netns/cni-397d9341-517f-c1b0-e022-9fbb811d974e" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.525 [INFO][4433] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" iface="eth0" netns="/var/run/netns/cni-397d9341-517f-c1b0-e022-9fbb811d974e" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.526 [INFO][4433] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.526 [INFO][4433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.558 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.558 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.558 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.566 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.567 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.569 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:54:48.584117 containerd[1456]: 2024-11-12 20:54:48.578 [INFO][4433] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:54:48.584536 containerd[1456]: time="2024-11-12T20:54:48.584310054Z" level=info msg="TearDown network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" successfully" Nov 12 20:54:48.584536 containerd[1456]: time="2024-11-12T20:54:48.584337076Z" level=info msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" returns successfully" Nov 12 20:54:48.585368 containerd[1456]: time="2024-11-12T20:54:48.585018525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-pgc2v,Uid:73906cf1-8520-41f1-9a4b-beeed90ae509,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:54:48.596555 kubelet[2524]: E1112 20:54:48.596492 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:48.664671 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558). 
Nov 12 20:54:48.892840 systemd-networkd[1395]: calib59f19c76d7: Gained IPv6LL Nov 12 20:54:48.917179 systemd-networkd[1395]: calic98a2aca817: Link UP Nov 12 20:54:48.917468 systemd-networkd[1395]: calic98a2aca817: Gained carrier Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.612 [INFO][4483] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0 coredns-6f6b679f8f- kube-system 76abd0d6-f821-42df-bb0b-16d0b8a05a4b 854 0 2024-11-12 20:54:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-6f9q6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic98a2aca817 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.613 [INFO][4483] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.652 [INFO][4514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" HandleID="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.669 [INFO][4514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" HandleID="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-6f9q6", "timestamp":"2024-11-12 20:54:48.652861599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.669 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.669 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.669 [INFO][4514] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.672 [INFO][4514] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.677 [INFO][4514] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.684 [INFO][4514] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.706 [INFO][4514] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.709 [INFO][4514] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.709 [INFO][4514] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.720 [INFO][4514] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38 Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.768 [INFO][4514] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4514] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4514] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" host="localhost" Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:54:49.189837 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" HandleID="k8s-pod-network.a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:48.913 [INFO][4483] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76abd0d6-f821-42df-bb0b-16d0b8a05a4b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-6f9q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic98a2aca817", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:48.913 [INFO][4483] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:48.914 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic98a2aca817 ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:48.916 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:48.916 
[INFO][4483] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76abd0d6-f821-42df-bb0b-16d0b8a05a4b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38", Pod:"coredns-6f6b679f8f-6f9q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic98a2aca817", MAC:"d2:7f:91:4e:2e:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:49.190425 containerd[1456]: 2024-11-12 20:54:49.164 [INFO][4483] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38" Namespace="kube-system" Pod="coredns-6f6b679f8f-6f9q6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:54:49.216834 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:49.218881 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:49.223552 systemd-logind[1438]: New session 9 of user core. Nov 12 20:54:49.233123 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:54:49.327827 systemd[1]: run-netns-cni\x2d397d9341\x2d517f\x2dc1b0\x2de022\x2d9fbb811d974e.mount: Deactivated successfully. Nov 12 20:54:49.358294 containerd[1456]: time="2024-11-12T20:54:49.358233157Z" level=info msg="StartContainer for \"f8055826421c2b62f129b63d97897c4b37317a772e7fb051e1fc0df6f7df104d\" returns successfully" Nov 12 20:54:49.472265 systemd-journald[1124]: Under memory pressure, flushing caches. 
Nov 12 20:54:49.470751 systemd-networkd[1395]: cali6012a33e05e: Link UP Nov 12 20:54:49.471940 sshd[4535]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:49.472676 systemd-networkd[1395]: cali6012a33e05e: Gained carrier Nov 12 20:54:49.478869 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:60558.service: Deactivated successfully. Nov 12 20:54:49.481316 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:54:49.482094 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:54:49.483949 systemd-logind[1438]: Removed session 9. Nov 12 20:54:49.585178 containerd[1456]: time="2024-11-12T20:54:49.584845457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:49.585983 containerd[1456]: time="2024-11-12T20:54:49.585077951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:49.585983 containerd[1456]: time="2024-11-12T20:54:49.585170400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:49.585983 containerd[1456]: time="2024-11-12T20:54:49.585422732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.654 [INFO][4502] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0 calico-apiserver-595fc8fb58- calico-apiserver 73906cf1-8520-41f1-9a4b-beeed90ae509 855 0 2024-11-12 20:54:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:595fc8fb58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-595fc8fb58-pgc2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6012a33e05e [] []}} ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.654 [INFO][4502] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.707 [INFO][4536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" HandleID="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.723 [INFO][4536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" HandleID="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005cced0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-595fc8fb58-pgc2v", "timestamp":"2024-11-12 20:54:48.707583279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.723 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.910 [INFO][4536] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:48.913 [INFO][4536] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.165 [INFO][4536] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.171 [INFO][4536] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.175 [INFO][4536] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.181 [INFO][4536] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.181 [INFO][4536] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.182 [INFO][4536] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65 Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.396 [INFO][4536] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.464 [INFO][4536] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.464 [INFO][4536] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" host="localhost" Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.464 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:54:49.590614 containerd[1456]: 2024-11-12 20:54:49.464 [INFO][4536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" HandleID="k8s-pod-network.d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.468 [INFO][4502] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"73906cf1-8520-41f1-9a4b-beeed90ae509", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-595fc8fb58-pgc2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6012a33e05e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.468 [INFO][4502] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.468 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6012a33e05e ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.471 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.471 [INFO][4502] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"73906cf1-8520-41f1-9a4b-beeed90ae509", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65", Pod:"calico-apiserver-595fc8fb58-pgc2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6012a33e05e", MAC:"6e:55:ea:6c:50:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:54:49.594710 containerd[1456]: 2024-11-12 20:54:49.584 [INFO][4502] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65" Namespace="calico-apiserver" Pod="calico-apiserver-595fc8fb58-pgc2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:54:49.604739 kubelet[2524]: E1112 20:54:49.604698 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:49.622050 kubelet[2524]: I1112 20:54:49.619976 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cc5bdbb85-fd6v2" podStartSLOduration=26.232916913 podStartE2EDuration="28.619953703s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:45.621461944 +0000 UTC m=+39.673210187" lastFinishedPulling="2024-11-12 20:54:48.008498734 +0000 UTC m=+42.060246977" observedRunningTime="2024-11-12 20:54:49.619697825 +0000 UTC m=+43.671446068" watchObservedRunningTime="2024-11-12 20:54:49.619953703 +0000 UTC m=+43.671701946" Nov 12 20:54:49.631103 systemd[1]: Started cri-containerd-a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38.scope - libcontainer container a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38. Nov 12 20:54:49.644616 containerd[1456]: time="2024-11-12T20:54:49.644520953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:49.644616 containerd[1456]: time="2024-11-12T20:54:49.644597412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:49.644822 containerd[1456]: time="2024-11-12T20:54:49.644613994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:49.644822 containerd[1456]: time="2024-11-12T20:54:49.644704741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:49.645986 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:54:49.675154 systemd[1]: Started cri-containerd-d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65.scope - libcontainer container d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65. Nov 12 20:54:49.686156 containerd[1456]: time="2024-11-12T20:54:49.686004849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6f9q6,Uid:76abd0d6-f821-42df-bb0b-16d0b8a05a4b,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38\"" Nov 12 20:54:49.687407 kubelet[2524]: E1112 20:54:49.687364 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:49.690247 containerd[1456]: time="2024-11-12T20:54:49.690216053Z" level=info msg="CreateContainer within sandbox \"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:54:49.699820 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:54:49.729748 containerd[1456]: time="2024-11-12T20:54:49.729170282Z" level=info msg="CreateContainer within sandbox \"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc475abdcfead6847cd7c2d06100d18fbfba2e11d82b8b816fab159d5c7ca49e\"" Nov 12 20:54:49.730381 containerd[1456]: time="2024-11-12T20:54:49.730345000Z" level=info msg="StartContainer for \"fc475abdcfead6847cd7c2d06100d18fbfba2e11d82b8b816fab159d5c7ca49e\"" Nov 12 20:54:49.741794 containerd[1456]: time="2024-11-12T20:54:49.741519799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595fc8fb58-pgc2v,Uid:73906cf1-8520-41f1-9a4b-beeed90ae509,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65\"" Nov 12 20:54:49.773107 systemd[1]: Started cri-containerd-fc475abdcfead6847cd7c2d06100d18fbfba2e11d82b8b816fab159d5c7ca49e.scope - libcontainer container fc475abdcfead6847cd7c2d06100d18fbfba2e11d82b8b816fab159d5c7ca49e. 
Nov 12 20:54:49.948457 containerd[1456]: time="2024-11-12T20:54:49.948387296Z" level=info msg="StartContainer for \"fc475abdcfead6847cd7c2d06100d18fbfba2e11d82b8b816fab159d5c7ca49e\" returns successfully" Nov 12 20:54:50.608569 kubelet[2524]: E1112 20:54:50.608511 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:50.609966 kubelet[2524]: I1112 20:54:50.609927 2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:54:50.726411 kubelet[2524]: I1112 20:54:50.726331 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6f9q6" podStartSLOduration=37.726309539 podStartE2EDuration="37.726309539s" podCreationTimestamp="2024-11-12 20:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:50.725315484 +0000 UTC m=+44.777063727" watchObservedRunningTime="2024-11-12 20:54:50.726309539 +0000 UTC m=+44.778057782" Nov 12 20:54:50.749478 systemd-networkd[1395]: calic98a2aca817: Gained IPv6LL Nov 12 20:54:50.813052 systemd-networkd[1395]: cali6012a33e05e: Gained IPv6LL Nov 12 20:54:51.468719 containerd[1456]: time="2024-11-12T20:54:51.468641513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:51.469855 containerd[1456]: time="2024-11-12T20:54:51.469765689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:54:51.471223 containerd[1456]: time="2024-11-12T20:54:51.471187173Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:51.473460 containerd[1456]: time="2024-11-12T20:54:51.473426629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:51.474187 containerd[1456]: time="2024-11-12T20:54:51.474154154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.465473756s" Nov 12 20:54:51.474245 containerd[1456]: time="2024-11-12T20:54:51.474191326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:54:51.475423 containerd[1456]: time="2024-11-12T20:54:51.475398554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:54:51.476677 containerd[1456]: time="2024-11-12T20:54:51.476637213Z" level=info msg="CreateContainer within sandbox \"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:54:51.502602 containerd[1456]: time="2024-11-12T20:54:51.502530381Z" level=info msg="CreateContainer within sandbox 
\"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e575a93b09b4724c72fc6fd2c4128aaa1277c8bb02499816dba1523095c9d1a7\"" Nov 12 20:54:51.503394 containerd[1456]: time="2024-11-12T20:54:51.503336940Z" level=info msg="StartContainer for \"e575a93b09b4724c72fc6fd2c4128aaa1277c8bb02499816dba1523095c9d1a7\"" Nov 12 20:54:51.540046 systemd[1]: Started cri-containerd-e575a93b09b4724c72fc6fd2c4128aaa1277c8bb02499816dba1523095c9d1a7.scope - libcontainer container e575a93b09b4724c72fc6fd2c4128aaa1277c8bb02499816dba1523095c9d1a7. Nov 12 20:54:51.846023 containerd[1456]: time="2024-11-12T20:54:51.845763285Z" level=info msg="StartContainer for \"e575a93b09b4724c72fc6fd2c4128aaa1277c8bb02499816dba1523095c9d1a7\" returns successfully" Nov 12 20:54:51.850041 kubelet[2524]: E1112 20:54:51.850016 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:52.864673 kubelet[2524]: I1112 20:54:52.864070 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-595fc8fb58-zn62j" podStartSLOduration=27.015137757 podStartE2EDuration="31.864047013s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:46.626295471 +0000 UTC m=+40.678043714" lastFinishedPulling="2024-11-12 20:54:51.475204727 +0000 UTC m=+45.526952970" observedRunningTime="2024-11-12 20:54:52.863507916 +0000 UTC m=+46.915256159" watchObservedRunningTime="2024-11-12 20:54:52.864047013 +0000 UTC m=+46.915795256" Nov 12 20:54:53.529877 containerd[1456]: time="2024-11-12T20:54:53.529794582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:53.531455 containerd[1456]: time="2024-11-12T20:54:53.531413145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:54:53.532949 containerd[1456]: time="2024-11-12T20:54:53.532910782Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:53.536729 containerd[1456]: time="2024-11-12T20:54:53.536679671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:53.537489 containerd[1456]: time="2024-11-12T20:54:53.537450578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.062025202s" Nov 12 20:54:53.537489 containerd[1456]: time="2024-11-12T20:54:53.537481919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:54:53.538479 containerd[1456]: time="2024-11-12T20:54:53.538426223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:54:53.540469 containerd[1456]: time="2024-11-12T20:54:53.540371549Z" 
level=info msg="CreateContainer within sandbox \"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:54:53.573467 containerd[1456]: time="2024-11-12T20:54:53.573403680Z" level=info msg="CreateContainer within sandbox \"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a9ca05512ba3d083259e4b6657b083d2057a7b7c75320d7def0ec61688937bc8\"" Nov 12 20:54:53.577044 containerd[1456]: time="2024-11-12T20:54:53.576117099Z" level=info msg="StartContainer for \"a9ca05512ba3d083259e4b6657b083d2057a7b7c75320d7def0ec61688937bc8\"" Nov 12 20:54:53.624069 systemd[1]: Started cri-containerd-a9ca05512ba3d083259e4b6657b083d2057a7b7c75320d7def0ec61688937bc8.scope - libcontainer container a9ca05512ba3d083259e4b6657b083d2057a7b7c75320d7def0ec61688937bc8. Nov 12 20:54:53.802827 containerd[1456]: time="2024-11-12T20:54:53.802631254Z" level=info msg="StartContainer for \"a9ca05512ba3d083259e4b6657b083d2057a7b7c75320d7def0ec61688937bc8\" returns successfully" Nov 12 20:54:53.969811 containerd[1456]: time="2024-11-12T20:54:53.969740790Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:53.971114 containerd[1456]: time="2024-11-12T20:54:53.971045273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:54:53.973226 containerd[1456]: time="2024-11-12T20:54:53.973195138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 434.727223ms" Nov 12 20:54:53.973287 containerd[1456]: time="2024-11-12T20:54:53.973226989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:54:53.974292 containerd[1456]: time="2024-11-12T20:54:53.974260155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:54:53.975589 containerd[1456]: time="2024-11-12T20:54:53.975518399Z" level=info msg="CreateContainer within sandbox \"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:54:53.996470 containerd[1456]: time="2024-11-12T20:54:53.996386649Z" level=info msg="CreateContainer within sandbox \"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfbc7724d3aeca892b751327a287dbc4843115023abaf62a1c024c1956e90d2f\"" Nov 12 20:54:53.997778 containerd[1456]: time="2024-11-12T20:54:53.997200109Z" level=info msg="StartContainer for \"cfbc7724d3aeca892b751327a287dbc4843115023abaf62a1c024c1956e90d2f\"" Nov 12 20:54:54.038122 systemd[1]: Started cri-containerd-cfbc7724d3aeca892b751327a287dbc4843115023abaf62a1c024c1956e90d2f.scope - libcontainer container cfbc7724d3aeca892b751327a287dbc4843115023abaf62a1c024c1956e90d2f. 
Nov 12 20:54:54.085992 containerd[1456]: time="2024-11-12T20:54:54.085729136Z" level=info msg="StartContainer for \"cfbc7724d3aeca892b751327a287dbc4843115023abaf62a1c024c1956e90d2f\" returns successfully" Nov 12 20:54:54.491998 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:60570.service - OpenSSH per-connection server daemon (10.0.0.1:60570). Nov 12 20:54:54.590532 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 60570 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:54.592832 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:54.597807 systemd-logind[1438]: New session 10 of user core. Nov 12 20:54:54.603053 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:54:54.767502 sshd[4848]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:54.772083 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:60570.service: Deactivated successfully. Nov 12 20:54:54.774830 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:54:54.775948 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:54:54.777127 systemd-logind[1438]: Removed session 10. Nov 12 20:54:54.898993 kubelet[2524]: I1112 20:54:54.897670 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-595fc8fb58-pgc2v" podStartSLOduration=29.668128077 podStartE2EDuration="33.897647589s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:49.744542388 +0000 UTC m=+43.796290631" lastFinishedPulling="2024-11-12 20:54:53.9740619 +0000 UTC m=+48.025810143" observedRunningTime="2024-11-12 20:54:54.881485058 +0000 UTC m=+48.933233302" watchObservedRunningTime="2024-11-12 20:54:54.897647589 +0000 UTC m=+48.949395832" Nov 12 20:54:56.336706 containerd[1456]: time="2024-11-12T20:54:56.336644175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:56.338837 containerd[1456]: time="2024-11-12T20:54:56.338729295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:54:56.340590 containerd[1456]: time="2024-11-12T20:54:56.340491278Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:56.345162 containerd[1456]: time="2024-11-12T20:54:56.345101100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:56.346368 containerd[1456]: time="2024-11-12T20:54:56.346307958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.372006462s" Nov 12 20:54:56.346368 containerd[1456]: time="2024-11-12T20:54:56.346353686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference 
\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:54:56.349257 containerd[1456]: time="2024-11-12T20:54:56.349198488Z" level=info msg="CreateContainer within sandbox \"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:54:56.368420 containerd[1456]: time="2024-11-12T20:54:56.368368730Z" level=info msg="CreateContainer within sandbox \"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8c763ad270b3a0f87611ed2ea6e54307d6dfcd45d65364aa23d37a0abcb99c93\"" Nov 12 20:54:56.370389 containerd[1456]: time="2024-11-12T20:54:56.369275707Z" level=info msg="StartContainer for \"8c763ad270b3a0f87611ed2ea6e54307d6dfcd45d65364aa23d37a0abcb99c93\"" Nov 12 20:54:56.409185 systemd[1]: Started cri-containerd-8c763ad270b3a0f87611ed2ea6e54307d6dfcd45d65364aa23d37a0abcb99c93.scope - libcontainer container 8c763ad270b3a0f87611ed2ea6e54307d6dfcd45d65364aa23d37a0abcb99c93. Nov 12 20:54:56.455513 containerd[1456]: time="2024-11-12T20:54:56.455321770Z" level=info msg="StartContainer for \"8c763ad270b3a0f87611ed2ea6e54307d6dfcd45d65364aa23d37a0abcb99c93\" returns successfully" Nov 12 20:54:56.923655 kubelet[2524]: I1112 20:54:56.923587 2524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gzdb2" podStartSLOduration=27.189805016 podStartE2EDuration="35.923564099s" podCreationTimestamp="2024-11-12 20:54:21 +0000 UTC" firstStartedPulling="2024-11-12 20:54:47.613757246 +0000 UTC m=+41.665505489" lastFinishedPulling="2024-11-12 20:54:56.347516329 +0000 UTC m=+50.399264572" observedRunningTime="2024-11-12 20:54:56.923229721 +0000 UTC m=+50.974977984" watchObservedRunningTime="2024-11-12 20:54:56.923564099 +0000 UTC m=+50.975312343" Nov 12 20:54:57.166471 kubelet[2524]: I1112 20:54:57.166417 2524 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:54:57.166471 kubelet[2524]: I1112 20:54:57.166463 2524 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:54:59.779495 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:38520.service - OpenSSH per-connection server daemon (10.0.0.1:38520). Nov 12 20:54:59.819829 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 38520 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:59.821650 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:59.825948 systemd-logind[1438]: New session 11 of user core. Nov 12 20:54:59.835043 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:54:59.973570 sshd[4925]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:59.977754 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:38520.service: Deactivated successfully. Nov 12 20:54:59.979640 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:54:59.980232 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:54:59.981069 systemd-logind[1438]: Removed session 11. 
Nov 12 20:55:02.197673 kubelet[2524]: I1112 20:55:02.197602 2524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:04.986716 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:38522.service - OpenSSH per-connection server daemon (10.0.0.1:38522). Nov 12 20:55:05.043112 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 38522 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:05.044820 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:05.049677 systemd-logind[1438]: New session 12 of user core. Nov 12 20:55:05.058101 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:55:05.174655 sshd[4980]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:05.184772 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:38522.service: Deactivated successfully. Nov 12 20:55:05.186788 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:55:05.188457 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:55:05.190050 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:38528.service - OpenSSH per-connection server daemon (10.0.0.1:38528). Nov 12 20:55:05.191159 systemd-logind[1438]: Removed session 12. Nov 12 20:55:05.238715 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 38528 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:05.240471 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:05.244608 systemd-logind[1438]: New session 13 of user core. Nov 12 20:55:05.252071 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:55:05.411673 sshd[4995]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:05.425476 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:38528.service: Deactivated successfully. Nov 12 20:55:05.427628 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:55:05.431013 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:55:05.444249 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:38544.service - OpenSSH per-connection server daemon (10.0.0.1:38544). Nov 12 20:55:05.446368 systemd-logind[1438]: Removed session 13. Nov 12 20:55:05.478107 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 38544 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:05.480179 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:05.485391 systemd-logind[1438]: New session 14 of user core. Nov 12 20:55:05.495165 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:55:05.615691 sshd[5008]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:05.620616 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:38544.service: Deactivated successfully. Nov 12 20:55:05.623084 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:55:05.623823 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:55:05.625081 systemd-logind[1438]: Removed session 14. Nov 12 20:55:06.098432 containerd[1456]: time="2024-11-12T20:55:06.098391439Z" level=info msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.153 [WARNING][5045] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0", GenerateName:"calico-kube-controllers-6cc5bdbb85-", Namespace:"calico-system", SelfLink:"", UID:"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc5bdbb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6", Pod:"calico-kube-controllers-6cc5bdbb85-fd6v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45e726cee75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.154 [INFO][5045] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.154 [INFO][5045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" iface="eth0" netns="" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.154 [INFO][5045] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.154 [INFO][5045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.179 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.179 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.179 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.185 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.185 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.186 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.191695 containerd[1456]: 2024-11-12 20:55:06.188 [INFO][5045] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.192237 containerd[1456]: time="2024-11-12T20:55:06.191759951Z" level=info msg="TearDown network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" successfully" Nov 12 20:55:06.192237 containerd[1456]: time="2024-11-12T20:55:06.191799266Z" level=info msg="StopPodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" returns successfully" Nov 12 20:55:06.192593 containerd[1456]: time="2024-11-12T20:55:06.192557146Z" level=info msg="RemovePodSandbox for \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" Nov 12 20:55:06.195011 containerd[1456]: time="2024-11-12T20:55:06.194946107Z" level=info msg="Forcibly stopping sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\"" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.306 [WARNING][5074] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0", GenerateName:"calico-kube-controllers-6cc5bdbb85-", Namespace:"calico-system", SelfLink:"", UID:"4a476cf6-4cc5-49bc-ac05-b79f0197d4f4", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc5bdbb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d42bcaf3d7e156d15848f7e0dd5544750add23720b6110c95a7a20f228b2a6", Pod:"calico-kube-controllers-6cc5bdbb85-fd6v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45e726cee75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.307 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.307 [INFO][5074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" iface="eth0" netns="" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.307 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.307 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.338 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.338 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.338 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.344 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.344 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" HandleID="k8s-pod-network.2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Workload="localhost-k8s-calico--kube--controllers--6cc5bdbb85--fd6v2-eth0" Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.345 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.350135 containerd[1456]: 2024-11-12 20:55:06.347 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b" Nov 12 20:55:06.350135 containerd[1456]: time="2024-11-12T20:55:06.350094607Z" level=info msg="TearDown network for sandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" successfully" Nov 12 20:55:06.373879 containerd[1456]: time="2024-11-12T20:55:06.373803316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:06.374088 containerd[1456]: time="2024-11-12T20:55:06.373935712Z" level=info msg="RemovePodSandbox \"2775843401601b553540c515d1fc2cd9a402c5f9868c26ca6ac5910ef58ba46b\" returns successfully" Nov 12 20:55:06.374731 containerd[1456]: time="2024-11-12T20:55:06.374653313Z" level=info msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\"" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.410 [WARNING][5105] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"e60f6ba4-080e-47c8-9607-4ea565272f92", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663", Pod:"calico-apiserver-595fc8fb58-zn62j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia416421b2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.411 [INFO][5105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.411 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" iface="eth0" netns="" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.411 [INFO][5105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.411 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.434 [INFO][5112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.434 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.434 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.439 [WARNING][5112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.439 [INFO][5112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.441 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.445818 containerd[1456]: 2024-11-12 20:55:06.443 [INFO][5105] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.446288 containerd[1456]: time="2024-11-12T20:55:06.445853383Z" level=info msg="TearDown network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" successfully" Nov 12 20:55:06.446288 containerd[1456]: time="2024-11-12T20:55:06.445882789Z" level=info msg="StopPodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" returns successfully" Nov 12 20:55:06.446520 containerd[1456]: time="2024-11-12T20:55:06.446488195Z" level=info msg="RemovePodSandbox for \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\"" Nov 12 20:55:06.446570 containerd[1456]: time="2024-11-12T20:55:06.446529435Z" level=info msg="Forcibly stopping sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\"" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.485 [WARNING][5136] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"e60f6ba4-080e-47c8-9607-4ea565272f92", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"801f6791a3182e0dd13fd371ee902ba3e12d4cba966a99ba305e25bb22490663", Pod:"calico-apiserver-595fc8fb58-zn62j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia416421b2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.485 [INFO][5136] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.485 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" iface="eth0" netns="" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.485 [INFO][5136] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.485 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.508 [INFO][5143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.508 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.508 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.514 [WARNING][5143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.514 [INFO][5143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" HandleID="k8s-pod-network.3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Workload="localhost-k8s-calico--apiserver--595fc8fb58--zn62j-eth0" Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.516 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.521693 containerd[1456]: 2024-11-12 20:55:06.518 [INFO][5136] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8" Nov 12 20:55:06.522390 containerd[1456]: time="2024-11-12T20:55:06.521745178Z" level=info msg="TearDown network for sandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" successfully" Nov 12 20:55:06.527661 containerd[1456]: time="2024-11-12T20:55:06.527588200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:06.527851 containerd[1456]: time="2024-11-12T20:55:06.527687602Z" level=info msg="RemovePodSandbox \"3dcdbe1e6696fb037d767d34c984bcaf52cf6f5e1cd1f9ecd2cd2328d3abafb8\" returns successfully" Nov 12 20:55:06.528464 containerd[1456]: time="2024-11-12T20:55:06.528411706Z" level=info msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\"" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.572 [WARNING][5166] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzdb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5", Pod:"csi-node-driver-gzdb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib59f19c76d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.573 [INFO][5166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.573 [INFO][5166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" iface="eth0" netns="" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.573 [INFO][5166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.573 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.595 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.595 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.595 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.601 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.601 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.603 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.609728 containerd[1456]: 2024-11-12 20:55:06.606 [INFO][5166] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.609728 containerd[1456]: time="2024-11-12T20:55:06.609691207Z" level=info msg="TearDown network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" successfully" Nov 12 20:55:06.609728 containerd[1456]: time="2024-11-12T20:55:06.609725192Z" level=info msg="StopPodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" returns successfully" Nov 12 20:55:06.610592 containerd[1456]: time="2024-11-12T20:55:06.610472492Z" level=info msg="RemovePodSandbox for \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\"" Nov 12 20:55:06.610672 containerd[1456]: time="2024-11-12T20:55:06.610594056Z" level=info msg="Forcibly stopping sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\"" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.648 [WARNING][5196] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzdb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb7ebc4-d76d-43f4-9467-1cf6406d5a57", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3849e1622397d0a6c126b52b414ba3b1339f42b9f4814c24be88e8866d7ac0e5", Pod:"csi-node-driver-gzdb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib59f19c76d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.648 [INFO][5196] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.648 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" iface="eth0" netns="" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.648 [INFO][5196] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.648 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.692 [INFO][5203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.693 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.693 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.699 [WARNING][5203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.699 [INFO][5203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" HandleID="k8s-pod-network.8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Workload="localhost-k8s-csi--node--driver--gzdb2-eth0" Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.701 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.706020 containerd[1456]: 2024-11-12 20:55:06.703 [INFO][5196] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc" Nov 12 20:55:06.706451 containerd[1456]: time="2024-11-12T20:55:06.706091819Z" level=info msg="TearDown network for sandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" successfully" Nov 12 20:55:06.727811 containerd[1456]: time="2024-11-12T20:55:06.727728217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:06.727811 containerd[1456]: time="2024-11-12T20:55:06.727822419Z" level=info msg="RemovePodSandbox \"8bc57f5d0bf143a831f0027e067cedd49e036b969c1fdc00a39557892c5c10cc\" returns successfully" Nov 12 20:55:06.728577 containerd[1456]: time="2024-11-12T20:55:06.728487649Z" level=info msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.768 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"73906cf1-8520-41f1-9a4b-beeed90ae509", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65", Pod:"calico-apiserver-595fc8fb58-pgc2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6012a33e05e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.789 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.789 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" iface="eth0" netns="" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.789 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.789 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.813 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.813 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.813 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.823 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.823 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.824 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.830858 containerd[1456]: 2024-11-12 20:55:06.827 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.831604 containerd[1456]: time="2024-11-12T20:55:06.831000873Z" level=info msg="TearDown network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" successfully" Nov 12 20:55:06.831604 containerd[1456]: time="2024-11-12T20:55:06.831037584Z" level=info msg="StopPodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" returns successfully" Nov 12 20:55:06.831660 containerd[1456]: time="2024-11-12T20:55:06.831641216Z" level=info msg="RemovePodSandbox for \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" Nov 12 20:55:06.831689 containerd[1456]: time="2024-11-12T20:55:06.831675622Z" level=info msg="Forcibly stopping sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\"" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.879 [WARNING][5256] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0", GenerateName:"calico-apiserver-595fc8fb58-", Namespace:"calico-apiserver", SelfLink:"", UID:"73906cf1-8520-41f1-9a4b-beeed90ae509", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595fc8fb58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ecfa2a9e19337a2d4220b1fae1feff88693776f1438be4c4fa329155bd8f65", Pod:"calico-apiserver-595fc8fb58-pgc2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6012a33e05e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.879 [INFO][5256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.879 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" iface="eth0" netns="" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.879 [INFO][5256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.879 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.901 [INFO][5263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.901 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.901 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.907 [WARNING][5263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.907 [INFO][5263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" HandleID="k8s-pod-network.c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Workload="localhost-k8s-calico--apiserver--595fc8fb58--pgc2v-eth0" Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.909 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:06.914176 containerd[1456]: 2024-11-12 20:55:06.911 [INFO][5256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4" Nov 12 20:55:06.914697 containerd[1456]: time="2024-11-12T20:55:06.914240628Z" level=info msg="TearDown network for sandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" successfully" Nov 12 20:55:07.064619 containerd[1456]: time="2024-11-12T20:55:07.064533757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:07.064814 containerd[1456]: time="2024-11-12T20:55:07.064639490Z" level=info msg="RemovePodSandbox \"c9a12d3406afa1dc0b67bfc6075e48442975f6136881769a20b411b5309fb9d4\" returns successfully" Nov 12 20:55:07.065369 containerd[1456]: time="2024-11-12T20:55:07.065341582Z" level=info msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\"" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.106 [WARNING][5286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--brc2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"97b0959e-83eb-40de-b1d7-86e881d338a7", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb", Pod:"coredns-6f6b679f8f-brc2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5f427d7e84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.106 [INFO][5286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.106 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" iface="eth0" netns="" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.106 [INFO][5286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.106 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.129 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.130 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.130 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.135 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.135 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.137 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:07.142201 containerd[1456]: 2024-11-12 20:55:07.139 [INFO][5286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.143156 containerd[1456]: time="2024-11-12T20:55:07.142256101Z" level=info msg="TearDown network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" successfully" Nov 12 20:55:07.143156 containerd[1456]: time="2024-11-12T20:55:07.142287782Z" level=info msg="StopPodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" returns successfully" Nov 12 20:55:07.143156 containerd[1456]: time="2024-11-12T20:55:07.142922163Z" level=info msg="RemovePodSandbox for \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\"" Nov 12 20:55:07.143156 containerd[1456]: time="2024-11-12T20:55:07.142952582Z" level=info msg="Forcibly stopping sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\"" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.185 [WARNING][5316] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--brc2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"97b0959e-83eb-40de-b1d7-86e881d338a7", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d7523078f0bddf82be0b35995ffce68ee8ab69dbb429602b0a2d399a1bb8ccb", Pod:"coredns-6f6b679f8f-brc2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5f427d7e84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.186 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.186 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" iface="eth0" netns="" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.186 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.186 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.213 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.214 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.214 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.221 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.221 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" HandleID="k8s-pod-network.a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Workload="localhost-k8s-coredns--6f6b679f8f--brc2t-eth0" Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.223 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:07.227811 containerd[1456]: 2024-11-12 20:55:07.225 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4" Nov 12 20:55:07.227811 containerd[1456]: time="2024-11-12T20:55:07.227782668Z" level=info msg="TearDown network for sandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" successfully" Nov 12 20:55:07.404814 containerd[1456]: time="2024-11-12T20:55:07.404748372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:07.405032 containerd[1456]: time="2024-11-12T20:55:07.404845148Z" level=info msg="RemovePodSandbox \"a69cd84eaa6480f1c558e87c9466c1c65ed38542316bb3ba56f209de0a717ef4\" returns successfully" Nov 12 20:55:07.405388 containerd[1456]: time="2024-11-12T20:55:07.405358606Z" level=info msg="StopPodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.450 [WARNING][5346] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76abd0d6-f821-42df-bb0b-16d0b8a05a4b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38", Pod:"coredns-6f6b679f8f-6f9q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic98a2aca817", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.450 [INFO][5346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.450 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" iface="eth0" netns="" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.450 [INFO][5346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.450 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.474 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.474 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.474 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.480 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.480 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.482 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:07.488424 containerd[1456]: 2024-11-12 20:55:07.485 [INFO][5346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.488424 containerd[1456]: time="2024-11-12T20:55:07.488386202Z" level=info msg="TearDown network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" successfully" Nov 12 20:55:07.488424 containerd[1456]: time="2024-11-12T20:55:07.488417253Z" level=info msg="StopPodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" returns successfully" Nov 12 20:55:07.490177 containerd[1456]: time="2024-11-12T20:55:07.489601202Z" level=info msg="RemovePodSandbox for \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" Nov 12 20:55:07.490177 containerd[1456]: time="2024-11-12T20:55:07.489647981Z" level=info msg="Forcibly stopping sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\"" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.702 [WARNING][5375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76abd0d6-f821-42df-bb0b-16d0b8a05a4b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8701668849c7f30e7852bc3a06b360ddb3df6becdb0a0f9cd551a9f77eebc38", Pod:"coredns-6f6b679f8f-6f9q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic98a2aca817", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.702 [INFO][5375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.702 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" iface="eth0" netns="" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.702 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.702 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.725 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.725 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.725 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.751 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.751 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" HandleID="k8s-pod-network.b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Workload="localhost-k8s-coredns--6f6b679f8f--6f9q6-eth0" Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.752 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:07.757521 containerd[1456]: 2024-11-12 20:55:07.755 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699" Nov 12 20:55:07.757521 containerd[1456]: time="2024-11-12T20:55:07.757496372Z" level=info msg="TearDown network for sandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" successfully" Nov 12 20:55:07.947514 containerd[1456]: time="2024-11-12T20:55:07.947436370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:07.947694 containerd[1456]: time="2024-11-12T20:55:07.947526423Z" level=info msg="RemovePodSandbox \"b64d7ccf71a6f0e16f3b2b53199edcf51268138c9eeb4929fe4cdb39e484c699\" returns successfully" Nov 12 20:55:10.630312 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:48108.service - OpenSSH per-connection server daemon (10.0.0.1:48108). Nov 12 20:55:10.686933 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 48108 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:10.688892 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:10.693187 systemd-logind[1438]: New session 15 of user core. Nov 12 20:55:10.703049 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:55:10.839543 sshd[5392]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:10.844043 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:48108.service: Deactivated successfully. Nov 12 20:55:10.846190 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:55:10.846974 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:55:10.847948 systemd-logind[1438]: Removed session 15. Nov 12 20:55:15.853649 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:46632.service - OpenSSH per-connection server daemon (10.0.0.1:46632). Nov 12 20:55:15.895535 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 46632 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:15.897539 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:15.902782 systemd-logind[1438]: New session 16 of user core. Nov 12 20:55:15.912306 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 12 20:55:16.040402 sshd[5414]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.046151 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:46632.service: Deactivated successfully. Nov 12 20:55:16.049129 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:55:16.049999 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:55:16.051395 systemd-logind[1438]: Removed session 16. Nov 12 20:55:18.075332 kubelet[2524]: E1112 20:55:18.075264 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.075713 kubelet[2524]: E1112 20:55:20.075551 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:21.054197 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640). Nov 12 20:55:21.115288 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:21.117247 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:21.122301 systemd-logind[1438]: New session 17 of user core. Nov 12 20:55:21.129072 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:55:21.246546 sshd[5450]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:21.250832 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:46640.service: Deactivated successfully. Nov 12 20:55:21.253148 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:55:21.253991 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:55:21.255078 systemd-logind[1438]: Removed session 17. Nov 12 20:55:26.258845 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:49636.service - OpenSSH per-connection server daemon (10.0.0.1:49636). Nov 12 20:55:26.296592 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 49636 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:26.298608 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:26.303479 systemd-logind[1438]: New session 18 of user core. Nov 12 20:55:26.317140 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:55:26.444892 sshd[5470]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:26.451323 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:49636.service: Deactivated successfully. Nov 12 20:55:26.453572 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:55:26.454255 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:55:26.455347 systemd-logind[1438]: Removed session 18. Nov 12 20:55:31.460535 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:49646.service - OpenSSH per-connection server daemon (10.0.0.1:49646). Nov 12 20:55:31.521874 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 49646 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:31.523712 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:31.527835 systemd-logind[1438]: New session 19 of user core. Nov 12 20:55:31.537037 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 12 20:55:31.665848 sshd[5486]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:31.675053 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:49646.service: Deactivated successfully. Nov 12 20:55:31.676958 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:55:31.678489 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:55:31.684531 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:49660.service - OpenSSH per-connection server daemon (10.0.0.1:49660). Nov 12 20:55:31.685576 systemd-logind[1438]: Removed session 19. Nov 12 20:55:31.739655 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 49660 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:31.741576 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:31.745936 systemd-logind[1438]: New session 20 of user core. Nov 12 20:55:31.756031 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:55:32.456977 sshd[5500]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:32.469007 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:49660.service: Deactivated successfully. Nov 12 20:55:32.471705 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:55:32.473831 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:55:32.480258 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:49668.service - OpenSSH per-connection server daemon (10.0.0.1:49668). Nov 12 20:55:32.481357 systemd-logind[1438]: Removed session 20. Nov 12 20:55:32.534292 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 49668 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:32.536237 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:32.541862 systemd-logind[1438]: New session 21 of user core. Nov 12 20:55:32.549056 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:55:34.636990 sshd[5531]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:34.645995 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:49732.service - OpenSSH per-connection server daemon (10.0.0.1:49732). Nov 12 20:55:34.648518 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:49668.service: Deactivated successfully. Nov 12 20:55:34.651249 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:55:34.653546 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:55:34.655209 systemd-logind[1438]: Removed session 21. Nov 12 20:55:34.693326 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 49732 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:34.695091 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:34.699570 systemd-logind[1438]: New session 22 of user core. Nov 12 20:55:34.707044 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:55:35.234878 sshd[5551]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:35.247867 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:49732.service: Deactivated successfully. Nov 12 20:55:35.250110 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:55:35.252689 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:55:35.261531 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:49746.service - OpenSSH per-connection server daemon (10.0.0.1:49746). 
Nov 12 20:55:35.262722 systemd-logind[1438]: Removed session 22. Nov 12 20:55:35.294497 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 49746 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:35.296388 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:35.301132 systemd-logind[1438]: New session 23 of user core. Nov 12 20:55:35.309313 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:55:35.433893 sshd[5566]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:35.439140 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:49746.service: Deactivated successfully. Nov 12 20:55:35.441869 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:55:35.442648 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:55:35.443800 systemd-logind[1438]: Removed session 23. Nov 12 20:55:35.485944 systemd-journald[1124]: Under memory pressure, flushing caches. Nov 12 20:55:37.075036 kubelet[2524]: E1112 20:55:37.074965 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:40.453078 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:54670.service - OpenSSH per-connection server daemon (10.0.0.1:54670). Nov 12 20:55:40.490188 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:40.491879 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:40.496110 systemd-logind[1438]: New session 24 of user core. Nov 12 20:55:40.505056 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:55:40.622081 sshd[5580]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:40.626268 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:54670.service: Deactivated successfully. Nov 12 20:55:40.628310 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:55:40.628929 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:55:40.629819 systemd-logind[1438]: Removed session 24. Nov 12 20:55:45.075369 kubelet[2524]: E1112 20:55:45.075309 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:45.634138 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:45012.service - OpenSSH per-connection server daemon (10.0.0.1:45012). Nov 12 20:55:45.669011 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 45012 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:45.670569 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:45.674505 systemd-logind[1438]: New session 25 of user core. Nov 12 20:55:45.684041 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:55:46.405268 sshd[5614]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:46.410590 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:45012.service: Deactivated successfully. Nov 12 20:55:46.412849 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:55:46.413463 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:55:46.414646 systemd-logind[1438]: Removed session 25. 
Nov 12 20:55:46.415647 kubelet[2524]: E1112 20:55:46.415575 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:51.421402 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:45026.service - OpenSSH per-connection server daemon (10.0.0.1:45026). Nov 12 20:55:51.459075 sshd[5657]: Accepted publickey for core from 10.0.0.1 port 45026 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:51.461011 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:51.465034 systemd-logind[1438]: New session 26 of user core. Nov 12 20:55:51.475166 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:55:51.591588 sshd[5657]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:51.595829 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:45026.service: Deactivated successfully. Nov 12 20:55:51.597982 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:55:51.598628 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:55:51.599759 systemd-logind[1438]: Removed session 26. Nov 12 20:55:56.606470 systemd[1]: Started sshd@27-10.0.0.136:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). Nov 12 20:55:56.649281 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:56.651302 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:56.655936 systemd-logind[1438]: New session 27 of user core. Nov 12 20:55:56.665106 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:55:56.874850 sshd[5671]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:56.880047 systemd[1]: sshd@27-10.0.0.136:22-10.0.0.1:34662.service: Deactivated successfully. Nov 12 20:55:56.882989 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:55:56.883801 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:55:56.884790 systemd-logind[1438]: Removed session 27. Nov 12 20:55:58.074846 kubelet[2524]: E1112 20:55:58.074805 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:01.891597 systemd[1]: Started sshd@28-10.0.0.136:22-10.0.0.1:34668.service - OpenSSH per-connection server daemon (10.0.0.1:34668). Nov 12 20:56:01.932788 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 34668 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:01.934651 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:01.939560 systemd-logind[1438]: New session 28 of user core. Nov 12 20:56:01.950140 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:56:02.080585 sshd[5686]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:02.085467 systemd[1]: sshd@28-10.0.0.136:22-10.0.0.1:34668.service: Deactivated successfully. Nov 12 20:56:02.087878 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 20:56:02.088737 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit. Nov 12 20:56:02.089784 systemd-logind[1438]: Removed session 28. 
Nov 12 20:56:05.075007 kubelet[2524]: E1112 20:56:05.074961 2524 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:07.096866 systemd[1]: Started sshd@29-10.0.0.136:22-10.0.0.1:42996.service - OpenSSH per-connection server daemon (10.0.0.1:42996). Nov 12 20:56:07.132894 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 42996 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:07.134585 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:07.139059 systemd-logind[1438]: New session 29 of user core. Nov 12 20:56:07.146060 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 12 20:56:07.496935 sshd[5731]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:07.500867 systemd[1]: sshd@29-10.0.0.136:22-10.0.0.1:42996.service: Deactivated successfully. Nov 12 20:56:07.502863 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 20:56:07.503477 systemd-logind[1438]: Session 29 logged out. Waiting for processes to exit. Nov 12 20:56:07.504364 systemd-logind[1438]: Removed session 29.